2026-02-18 02:26:23.212078 | Job console starting
2026-02-18 02:26:23.223968 | Updating git repos
2026-02-18 02:26:23.325738 | Cloning repos into workspace
2026-02-18 02:26:23.556846 | Restoring repo states
2026-02-18 02:26:23.580684 | Merging changes
2026-02-18 02:26:23.580715 | Checking out repos
2026-02-18 02:26:23.837120 | Preparing playbooks
2026-02-18 02:26:24.538556 | Running Ansible setup
2026-02-18 02:26:28.982131 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-18 02:26:29.775697 |
2026-02-18 02:26:29.775866 | PLAY [Base pre]
2026-02-18 02:26:29.793868 |
2026-02-18 02:26:29.794040 | TASK [Setup log path fact]
2026-02-18 02:26:29.818635 | orchestrator | ok
2026-02-18 02:26:29.836960 |
2026-02-18 02:26:29.837099 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-18 02:26:29.882083 | orchestrator | ok
2026-02-18 02:26:29.895535 |
2026-02-18 02:26:29.895645 | TASK [emit-job-header : Print job information]
2026-02-18 02:26:29.955628 | # Job Information
2026-02-18 02:26:29.955916 | Ansible Version: 2.16.14
2026-02-18 02:26:29.956033 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-18 02:26:29.956094 | Pipeline: periodic-midnight
2026-02-18 02:26:29.956134 | Executor: 521e9411259a
2026-02-18 02:26:29.956168 | Triggered by: https://github.com/osism/testbed
2026-02-18 02:26:29.956204 | Event ID: 9373a76413ae48f49f3438728df72274
2026-02-18 02:26:29.965717 |
2026-02-18 02:26:29.965853 | LOOP [emit-job-header : Print node information]
2026-02-18 02:26:30.099331 | orchestrator | ok:
2026-02-18 02:26:30.099645 | orchestrator | # Node Information
2026-02-18 02:26:30.099681 | orchestrator | Inventory Hostname: orchestrator
2026-02-18 02:26:30.099706 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-18 02:26:30.099727 | orchestrator | Username: zuul-testbed03
2026-02-18 02:26:30.099747 | orchestrator | Distro: Debian 12.13
2026-02-18 02:26:30.099770 | orchestrator | Provider: static-testbed
2026-02-18 02:26:30.099790 | orchestrator | Region:
2026-02-18 02:26:30.099811 | orchestrator | Label: testbed-orchestrator
2026-02-18 02:26:30.099831 | orchestrator | Product Name: OpenStack Nova
2026-02-18 02:26:30.099850 | orchestrator | Interface IP: 81.163.193.140
2026-02-18 02:26:30.125845 |
2026-02-18 02:26:30.126084 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-18 02:26:30.627662 | orchestrator -> localhost | changed
2026-02-18 02:26:30.636716 |
2026-02-18 02:26:30.636840 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-18 02:26:31.729727 | orchestrator -> localhost | changed
2026-02-18 02:26:31.755327 |
2026-02-18 02:26:31.755507 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-18 02:26:32.064370 | orchestrator -> localhost | ok
2026-02-18 02:26:32.072120 |
2026-02-18 02:26:32.072249 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-18 02:26:32.102815 | orchestrator | ok
2026-02-18 02:26:32.120307 | orchestrator | included: /var/lib/zuul/builds/dcb102d1513646059c8b4086c535c802/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-18 02:26:32.128486 |
2026-02-18 02:26:32.128583 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-18 02:26:33.997853 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-18 02:26:33.998531 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/dcb102d1513646059c8b4086c535c802/work/dcb102d1513646059c8b4086c535c802_id_rsa
2026-02-18 02:26:33.998660 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/dcb102d1513646059c8b4086c535c802/work/dcb102d1513646059c8b4086c535c802_id_rsa.pub
2026-02-18 02:26:33.998743 | orchestrator -> localhost | The key fingerprint is:
2026-02-18 02:26:33.998815 | orchestrator -> localhost | SHA256:jObtc6NMa3M0EKCzdTUVHYi9WlGBCrTGJxZJ8rt3Dh0 zuul-build-sshkey
2026-02-18 02:26:33.998949 | orchestrator -> localhost | The key's randomart image is:
2026-02-18 02:26:33.999045 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-18 02:26:33.999111 | orchestrator -> localhost | | o++.o+o*+o |
2026-02-18 02:26:33.999174 | orchestrator -> localhost | | . +++..= . |
2026-02-18 02:26:33.999232 | orchestrator -> localhost | | o . Oo.. o |
2026-02-18 02:26:33.999288 | orchestrator -> localhost | | + *.+. o |
2026-02-18 02:26:33.999345 | orchestrator -> localhost | | . o S. oE |
2026-02-18 02:26:33.999416 | orchestrator -> localhost | | o . .+. . |
2026-02-18 02:26:33.999476 | orchestrator -> localhost | | . +.o.o |
2026-02-18 02:26:33.999532 | orchestrator -> localhost | | ++o++ |
2026-02-18 02:26:33.999593 | orchestrator -> localhost | | .=* .. |
2026-02-18 02:26:33.999654 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-18 02:26:33.999813 | orchestrator -> localhost | ok: Runtime: 0:00:01.364923
2026-02-18 02:26:34.017148 |
2026-02-18 02:26:34.017290 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-18 02:26:34.050357 | orchestrator | ok
2026-02-18 02:26:34.068717 | orchestrator | included: /var/lib/zuul/builds/dcb102d1513646059c8b4086c535c802/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-18 02:26:34.083324 |
2026-02-18 02:26:34.083476 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-18 02:26:34.119459 | orchestrator | skipping: Conditional result was False
2026-02-18 02:26:34.134642 |
2026-02-18 02:26:34.134805 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-18 02:26:34.800912 | orchestrator | changed
2026-02-18 02:26:34.809753 |
2026-02-18 02:26:34.809889 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-18 02:26:35.163509 | orchestrator | ok
2026-02-18 02:26:35.173647 |
2026-02-18 02:26:35.173866 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-18 02:26:35.642576 | orchestrator | ok
2026-02-18 02:26:35.652359 |
2026-02-18 02:26:35.652503 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-18 02:26:36.129780 | orchestrator | ok
2026-02-18 02:26:36.141250 |
2026-02-18 02:26:36.141387 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-18 02:26:36.166620 | orchestrator | skipping: Conditional result was False
2026-02-18 02:26:36.179504 |
2026-02-18 02:26:36.179653 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-18 02:26:36.669328 | orchestrator -> localhost | changed
2026-02-18 02:26:36.701364 |
2026-02-18 02:26:36.701497 | TASK [add-build-sshkey : Add back temp key]
2026-02-18 02:26:37.059234 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/dcb102d1513646059c8b4086c535c802/work/dcb102d1513646059c8b4086c535c802_id_rsa (zuul-build-sshkey)
2026-02-18 02:26:37.059531 | orchestrator -> localhost | ok: Runtime: 0:00:00.020928
2026-02-18 02:26:37.067501 |
2026-02-18 02:26:37.067613 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-18 02:26:37.570864 | orchestrator | ok
2026-02-18 02:26:37.580359 |
2026-02-18 02:26:37.580495 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-18 02:26:37.615736 | orchestrator | skipping: Conditional result was False
2026-02-18 02:26:37.677500 |
2026-02-18 02:26:37.677641 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-18 02:26:38.192656 | orchestrator | ok
2026-02-18 02:26:38.210714 |
2026-02-18 02:26:38.210894 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-18 02:26:38.261888 | orchestrator | ok
2026-02-18 02:26:38.275121 |
2026-02-18 02:26:38.275280 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-18 02:26:38.587303 | orchestrator -> localhost | ok
2026-02-18 02:26:38.603860 |
2026-02-18 02:26:38.604050 | TASK [validate-host : Collect information about the host]
2026-02-18 02:26:39.910357 | orchestrator | ok
2026-02-18 02:26:39.929308 |
2026-02-18 02:26:39.929469 | TASK [validate-host : Sanitize hostname]
2026-02-18 02:26:40.005048 | orchestrator | ok
2026-02-18 02:26:40.014564 |
2026-02-18 02:26:40.014701 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-18 02:26:40.598109 | orchestrator -> localhost | changed
2026-02-18 02:26:40.611605 |
2026-02-18 02:26:40.611781 | TASK [validate-host : Collect information about zuul worker]
2026-02-18 02:26:41.137741 | orchestrator | ok
2026-02-18 02:26:41.146630 |
2026-02-18 02:26:41.146816 | TASK [validate-host : Write out all zuul information for each host]
2026-02-18 02:26:41.716428 | orchestrator -> localhost | changed
2026-02-18 02:26:41.737588 |
2026-02-18 02:26:41.737729 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-18 02:26:42.080242 | orchestrator | ok
2026-02-18 02:26:42.090433 |
2026-02-18 02:26:42.090636 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-18 02:27:03.452142 | orchestrator | changed:
2026-02-18 02:27:03.453232 | orchestrator | .d..t...... src/
2026-02-18 02:27:03.453320 | orchestrator | .d..t...... src/github.com/
2026-02-18 02:27:03.453358 | orchestrator | .d..t...... src/github.com/osism/
2026-02-18 02:27:03.453390 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-18 02:27:03.453420 | orchestrator | RedHat.yml
2026-02-18 02:27:03.469883 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-18 02:27:03.469901 | orchestrator | RedHat.yml
2026-02-18 02:27:03.469963 | orchestrator | = 1.53.0"...
2026-02-18 02:27:14.614374 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-18 02:27:14.634134 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-18 02:27:15.135977 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-18 02:27:16.249260 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-18 02:27:16.325763 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-02-18 02:27:16.960518 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-02-18 02:27:17.353908 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-18 02:27:17.934900 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-18 02:27:17.935012 | orchestrator |
2026-02-18 02:27:17.935025 | orchestrator | Providers are signed by their developers.
2026-02-18 02:27:17.935034 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-18 02:27:17.935049 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-18 02:27:17.935098 | orchestrator |
2026-02-18 02:27:17.935106 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-18 02:27:17.935113 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-18 02:27:17.935133 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-18 02:27:17.935167 | orchestrator | you run "tofu init" in the future.
2026-02-18 02:27:17.935777 | orchestrator |
2026-02-18 02:27:17.935860 | orchestrator | OpenTofu has been successfully initialized!
2026-02-18 02:27:17.935896 | orchestrator |
2026-02-18 02:27:17.935905 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-18 02:27:17.935913 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-18 02:27:17.935919 | orchestrator | should now work.
2026-02-18 02:27:17.935925 | orchestrator |
2026-02-18 02:27:17.935931 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-18 02:27:17.935938 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-18 02:27:17.935956 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-18 02:27:18.107935 | orchestrator | Created and switched to workspace "ci"!
2026-02-18 02:27:18.108006 | orchestrator |
2026-02-18 02:27:18.108016 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-18 02:27:18.108026 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-18 02:27:18.108033 | orchestrator | for this configuration.
2026-02-18 02:27:18.236463 | orchestrator | ci.auto.tfvars
2026-02-18 02:27:18.240554 | orchestrator | default_custom.tf
2026-02-18 02:27:19.269838 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-18 02:27:19.868252 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-18 02:27:20.178495 | orchestrator |
2026-02-18 02:27:20.178573 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-18 02:27:20.178582 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-18 02:27:20.178587 | orchestrator | + create
2026-02-18 02:27:20.178592 | orchestrator | <= read (data resources)
2026-02-18 02:27:20.178596 | orchestrator |
2026-02-18 02:27:20.178601 | orchestrator | OpenTofu will perform the following actions:
2026-02-18 02:27:20.178614 | orchestrator |
2026-02-18 02:27:20.178619 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-18 02:27:20.178623 | orchestrator | # (config refers to values not yet known)
2026-02-18 02:27:20.178627 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-18 02:27:20.178631 | orchestrator | + checksum = (known after apply)
2026-02-18 02:27:20.178635 | orchestrator | + created_at = (known after apply)
2026-02-18 02:27:20.178640 | orchestrator | + file = (known after apply)
2026-02-18 02:27:20.178644 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.178670 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.178674 | orchestrator | + min_disk_gb = (known after apply)
2026-02-18 02:27:20.178678 | orchestrator | + min_ram_mb = (known after apply)
2026-02-18 02:27:20.178682 | orchestrator | + most_recent = true
2026-02-18 02:27:20.178686 | orchestrator | + name = (known after apply)
2026-02-18 02:27:20.178689 | orchestrator | + protected = (known after apply)
2026-02-18 02:27:20.178693 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.178702 | orchestrator | + schema = (known after apply)
2026-02-18 02:27:20.178708 | orchestrator | + size_bytes = (known after apply)
2026-02-18 02:27:20.178714 | orchestrator | + tags = (known after apply)
2026-02-18 02:27:20.178719 | orchestrator | + updated_at = (known after apply)
2026-02-18 02:27:20.178727 | orchestrator | }
2026-02-18 02:27:20.178736 | orchestrator |
2026-02-18 02:27:20.178743 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-18 02:27:20.178749 | orchestrator | # (config refers to values not yet known)
2026-02-18 02:27:20.178755 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-18 02:27:20.178762 | orchestrator | + checksum = (known after apply)
2026-02-18 02:27:20.178767 | orchestrator | + created_at = (known after apply)
2026-02-18 02:27:20.178773 | orchestrator | + file = (known after apply)
2026-02-18 02:27:20.178780 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.178785 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.178791 | orchestrator | + min_disk_gb = (known after apply)
2026-02-18 02:27:20.178796 | orchestrator | + min_ram_mb = (known after apply)
2026-02-18 02:27:20.178801 | orchestrator | + most_recent = true
2026-02-18 02:27:20.178807 | orchestrator | + name = (known after apply)
2026-02-18 02:27:20.178813 | orchestrator | + protected = (known after apply)
2026-02-18 02:27:20.178819 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.178825 | orchestrator | + schema = (known after apply)
2026-02-18 02:27:20.178831 | orchestrator | + size_bytes = (known after apply)
2026-02-18 02:27:20.178837 | orchestrator | + tags = (known after apply)
2026-02-18 02:27:20.178843 | orchestrator | + updated_at = (known after apply)
2026-02-18 02:27:20.178849 | orchestrator | }
2026-02-18 02:27:20.178859 | orchestrator |
2026-02-18 02:27:20.178865 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-18 02:27:20.178872 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-18 02:27:20.178878 | orchestrator | + content = (known after apply)
2026-02-18 02:27:20.178884 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-18 02:27:20.178890 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-18 02:27:20.178896 | orchestrator | + content_md5 = (known after apply)
2026-02-18 02:27:20.178903 | orchestrator | + content_sha1 = (known after apply)
2026-02-18 02:27:20.178909 | orchestrator | + content_sha256 = (known after apply)
2026-02-18 02:27:20.178915 | orchestrator | + content_sha512 = (known after apply)
2026-02-18 02:27:20.178921 | orchestrator | + directory_permission = "0777"
2026-02-18 02:27:20.178927 | orchestrator | + file_permission = "0644"
2026-02-18 02:27:20.178932 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-18 02:27:20.178939 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.178945 | orchestrator | }
2026-02-18 02:27:20.178951 | orchestrator |
2026-02-18 02:27:20.178958 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-18 02:27:20.178964 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-18 02:27:20.178971 | orchestrator | + content = (known after apply)
2026-02-18 02:27:20.178978 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-18 02:27:20.178984 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-18 02:27:20.178991 | orchestrator | + content_md5 = (known after apply)
2026-02-18 02:27:20.178997 | orchestrator | + content_sha1 = (known after apply)
2026-02-18 02:27:20.179003 | orchestrator | + content_sha256 = (known after apply)
2026-02-18 02:27:20.179010 | orchestrator | + content_sha512 = (known after apply)
2026-02-18 02:27:20.179014 | orchestrator | + directory_permission = "0777"
2026-02-18 02:27:20.179018 | orchestrator | + file_permission = "0644"
2026-02-18 02:27:20.179028 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-18 02:27:20.179032 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179036 | orchestrator | }
2026-02-18 02:27:20.179042 | orchestrator |
2026-02-18 02:27:20.179057 | orchestrator | # local_file.inventory will be created
2026-02-18 02:27:20.179061 | orchestrator | + resource "local_file" "inventory" {
2026-02-18 02:27:20.179066 | orchestrator | + content = (known after apply)
2026-02-18 02:27:20.179091 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-18 02:27:20.179097 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-18 02:27:20.179109 | orchestrator | + content_md5 = (known after apply)
2026-02-18 02:27:20.179115 | orchestrator | + content_sha1 = (known after apply)
2026-02-18 02:27:20.179121 | orchestrator | + content_sha256 = (known after apply)
2026-02-18 02:27:20.179127 | orchestrator | + content_sha512 = (known after apply)
2026-02-18 02:27:20.179133 | orchestrator | + directory_permission = "0777"
2026-02-18 02:27:20.179139 | orchestrator | + file_permission = "0644"
2026-02-18 02:27:20.179160 | orchestrator | + filename = "inventory.ci"
2026-02-18 02:27:20.179167 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179171 | orchestrator | }
2026-02-18 02:27:20.179175 | orchestrator |
2026-02-18 02:27:20.179179 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-18 02:27:20.179183 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-18 02:27:20.179186 | orchestrator | + content = (sensitive value)
2026-02-18 02:27:20.179190 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-18 02:27:20.179194 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-18 02:27:20.179197 | orchestrator | + content_md5 = (known after apply)
2026-02-18 02:27:20.179201 | orchestrator | + content_sha1 = (known after apply)
2026-02-18 02:27:20.179205 | orchestrator | + content_sha256 = (known after apply)
2026-02-18 02:27:20.179209 | orchestrator | + content_sha512 = (known after apply)
2026-02-18 02:27:20.179213 | orchestrator | + directory_permission = "0700"
2026-02-18 02:27:20.179216 | orchestrator | + file_permission = "0600"
2026-02-18 02:27:20.179220 | orchestrator | + filename = ".id_rsa.ci"
2026-02-18 02:27:20.179224 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179227 | orchestrator | }
2026-02-18 02:27:20.179231 | orchestrator |
2026-02-18 02:27:20.179235 | orchestrator | # null_resource.node_semaphore will be created
2026-02-18 02:27:20.179239 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-18 02:27:20.179243 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179250 | orchestrator | }
2026-02-18 02:27:20.179254 | orchestrator |
2026-02-18 02:27:20.179258 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-18 02:27:20.179262 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-18 02:27:20.179266 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.179270 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.179274 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179277 | orchestrator | + image_id = (known after apply)
2026-02-18 02:27:20.179281 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.179285 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-18 02:27:20.179289 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.179292 | orchestrator | + size = 80
2026-02-18 02:27:20.179296 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.179300 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.179304 | orchestrator | }
2026-02-18 02:27:20.179308 | orchestrator |
2026-02-18 02:27:20.179311 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-18 02:27:20.179315 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-18 02:27:20.179319 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.179323 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.179332 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179336 | orchestrator | + image_id = (known after apply)
2026-02-18 02:27:20.179339 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.179343 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-18 02:27:20.179347 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.179351 | orchestrator | + size = 80
2026-02-18 02:27:20.179357 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.179363 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.179368 | orchestrator | }
2026-02-18 02:27:20.179377 | orchestrator |
2026-02-18 02:27:20.179384 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-18 02:27:20.179389 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-18 02:27:20.179396 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.179402 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.179408 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179414 | orchestrator | + image_id = (known after apply)
2026-02-18 02:27:20.179420 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.179426 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-18 02:27:20.179432 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.179438 | orchestrator | + size = 80
2026-02-18 02:27:20.179444 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.179450 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.179460 | orchestrator | }
2026-02-18 02:27:20.179470 | orchestrator |
2026-02-18 02:27:20.179476 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-18 02:27:20.179481 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-18 02:27:20.179487 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.179493 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.179499 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179505 | orchestrator | + image_id = (known after apply)
2026-02-18 02:27:20.179511 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.179517 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-18 02:27:20.179524 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.179529 | orchestrator | + size = 80
2026-02-18 02:27:20.179533 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.179537 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.179541 | orchestrator | }
2026-02-18 02:27:20.179547 | orchestrator |
2026-02-18 02:27:20.179553 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-18 02:27:20.179558 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-18 02:27:20.179563 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.179568 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.179573 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179578 | orchestrator | + image_id = (known after apply)
2026-02-18 02:27:20.179591 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.179599 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-18 02:27:20.179605 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.179610 | orchestrator | + size = 80
2026-02-18 02:27:20.179615 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.179621 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.179627 | orchestrator | }
2026-02-18 02:27:20.179633 | orchestrator |
2026-02-18 02:27:20.179639 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-18 02:27:20.179645 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-18 02:27:20.179650 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.179656 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.179668 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179674 | orchestrator | + image_id = (known after apply)
2026-02-18 02:27:20.179679 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.179685 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-18 02:27:20.179690 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.179695 | orchestrator | + size = 80
2026-02-18 02:27:20.179701 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.179706 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.179717 | orchestrator | }
2026-02-18 02:27:20.179723 | orchestrator |
2026-02-18 02:27:20.179730 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-18 02:27:20.179735 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-18 02:27:20.179741 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.179747 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.179753 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179759 | orchestrator | + image_id = (known after apply)
2026-02-18 02:27:20.179765 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.179771 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-18 02:27:20.179777 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.179782 | orchestrator | + size = 80
2026-02-18 02:27:20.179788 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.179794 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.179800 | orchestrator | }
2026-02-18 02:27:20.179806 | orchestrator |
2026-02-18 02:27:20.179812 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-18 02:27:20.179818 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-18 02:27:20.179823 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.179828 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.179834 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179839 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.179846 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-18 02:27:20.179851 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.179857 | orchestrator | + size = 20
2026-02-18 02:27:20.179863 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.179868 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.179874 | orchestrator | }
2026-02-18 02:27:20.179880 | orchestrator |
2026-02-18 02:27:20.179887 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-18 02:27:20.179893 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-18 02:27:20.179898 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.179904 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.179910 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179915 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.179920 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-18 02:27:20.179926 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.179931 | orchestrator | + size = 20
2026-02-18 02:27:20.179937 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.179942 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.179948 | orchestrator | }
2026-02-18 02:27:20.179954 | orchestrator |
2026-02-18 02:27:20.179959 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-18 02:27:20.179964 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-18 02:27:20.179970 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.179976 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.179981 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.179987 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.179993 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-18 02:27:20.180007 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.180013 | orchestrator | + size = 20
2026-02-18 02:27:20.180019 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.180025 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.180036 | orchestrator | }
2026-02-18 02:27:20.180042 | orchestrator |
2026-02-18 02:27:20.180048 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-18 02:27:20.180054 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-18 02:27:20.180060 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.180065 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.180071 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.180076 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.180082 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-18 02:27:20.180087 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.180093 | orchestrator | + size = 20
2026-02-18 02:27:20.180099 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.180105 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.180111 | orchestrator | }
2026-02-18 02:27:20.180117 | orchestrator |
2026-02-18 02:27:20.180123 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-18 02:27:20.180129 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-18 02:27:20.180135 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.180232 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.180246 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.180250 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.180254 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-18 02:27:20.180271 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.180275 | orchestrator | + size = 20
2026-02-18 02:27:20.180279 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.180283 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.180287 | orchestrator | }
2026-02-18 02:27:20.180291 | orchestrator |
2026-02-18 02:27:20.180295 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-18 02:27:20.180299 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-18 02:27:20.180303 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.180307 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.180311 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.180314 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.180318 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-18 02:27:20.180322 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.180326 | orchestrator | + size = 20
2026-02-18 02:27:20.180330 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.180333 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.180337 | orchestrator | }
2026-02-18 02:27:20.180341 | orchestrator |
2026-02-18 02:27:20.180345 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-18 02:27:20.180348 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-18 02:27:20.180352 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.180356 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.180360 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.180363 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.180367 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-18 02:27:20.180371 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.180374 | orchestrator | + size = 20
2026-02-18 02:27:20.180378 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.180382 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.180390 | orchestrator | }
2026-02-18 02:27:20.180395 | orchestrator |
2026-02-18 02:27:20.180399 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-18 02:27:20.180408 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-18 02:27:20.180412 | orchestrator | + attachment = (known after apply)
2026-02-18 02:27:20.180426 | orchestrator | + availability_zone = "nova"
2026-02-18 02:27:20.180430 | orchestrator | + id = (known after apply)
2026-02-18 02:27:20.180434 | orchestrator | + metadata = (known after apply)
2026-02-18 02:27:20.180438 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-18 02:27:20.180441 | orchestrator | + region = (known after apply)
2026-02-18 02:27:20.180445 | orchestrator | + size = 20
2026-02-18 02:27:20.180449 | orchestrator | + volume_retype_policy = "never"
2026-02-18 02:27:20.180453 | orchestrator | + volume_type = "ssd"
2026-02-18 02:27:20.180456 | orchestrator | }
2026-02-18 02:27:20.180460 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-18 02:27:20.180464 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-18 02:27:20.180468 | orchestrator | + attachment = (known after apply) 2026-02-18 02:27:20.180471 | orchestrator | + availability_zone = "nova" 2026-02-18 02:27:20.180475 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.180479 | orchestrator | + metadata = (known after apply) 2026-02-18 02:27:20.180483 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-18 02:27:20.180486 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.180492 | orchestrator | + size = 20 2026-02-18 02:27:20.180499 | orchestrator | + volume_retype_policy = "never" 2026-02-18 02:27:20.180504 | orchestrator | + volume_type = "ssd" 2026-02-18 02:27:20.180514 | orchestrator | } 2026-02-18 02:27:20.180525 | orchestrator | 2026-02-18 02:27:20.180530 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-18 02:27:20.180536 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-18 02:27:20.180543 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-18 02:27:20.180548 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-18 02:27:20.180554 | orchestrator | + all_metadata = (known after apply) 2026-02-18 02:27:20.180560 | orchestrator | + all_tags = (known after apply) 2026-02-18 02:27:20.180565 | orchestrator | + availability_zone = "nova" 2026-02-18 02:27:20.180572 | orchestrator | + config_drive = true 2026-02-18 02:27:20.180578 | orchestrator | + created = (known after apply) 2026-02-18 02:27:20.180584 | orchestrator | + flavor_id = (known after apply) 2026-02-18 02:27:20.180590 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-18 02:27:20.180595 | orchestrator | + force_delete = false 2026-02-18 02:27:20.180601 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-18 02:27:20.180607 | 
orchestrator | + id = (known after apply) 2026-02-18 02:27:20.180613 | orchestrator | + image_id = (known after apply) 2026-02-18 02:27:20.180619 | orchestrator | + image_name = (known after apply) 2026-02-18 02:27:20.180625 | orchestrator | + key_pair = "testbed" 2026-02-18 02:27:20.180631 | orchestrator | + name = "testbed-manager" 2026-02-18 02:27:20.180637 | orchestrator | + power_state = "active" 2026-02-18 02:27:20.180643 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.180649 | orchestrator | + security_groups = (known after apply) 2026-02-18 02:27:20.180655 | orchestrator | + stop_before_destroy = false 2026-02-18 02:27:20.180661 | orchestrator | + updated = (known after apply) 2026-02-18 02:27:20.180667 | orchestrator | + user_data = (sensitive value) 2026-02-18 02:27:20.180673 | orchestrator | 2026-02-18 02:27:20.180679 | orchestrator | + block_device { 2026-02-18 02:27:20.180684 | orchestrator | + boot_index = 0 2026-02-18 02:27:20.180690 | orchestrator | + delete_on_termination = false 2026-02-18 02:27:20.180701 | orchestrator | + destination_type = "volume" 2026-02-18 02:27:20.180708 | orchestrator | + multiattach = false 2026-02-18 02:27:20.180713 | orchestrator | + source_type = "volume" 2026-02-18 02:27:20.180717 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.180727 | orchestrator | } 2026-02-18 02:27:20.180731 | orchestrator | 2026-02-18 02:27:20.180735 | orchestrator | + network { 2026-02-18 02:27:20.180738 | orchestrator | + access_network = false 2026-02-18 02:27:20.180742 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-18 02:27:20.180746 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-18 02:27:20.180750 | orchestrator | + mac = (known after apply) 2026-02-18 02:27:20.180753 | orchestrator | + name = (known after apply) 2026-02-18 02:27:20.180757 | orchestrator | + port = (known after apply) 2026-02-18 02:27:20.180761 | orchestrator | + uuid = (known after apply) 2026-02-18 
02:27:20.180765 | orchestrator | } 2026-02-18 02:27:20.180769 | orchestrator | } 2026-02-18 02:27:20.180775 | orchestrator | 2026-02-18 02:27:20.180779 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-18 02:27:20.180782 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-18 02:27:20.180786 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-18 02:27:20.180790 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-18 02:27:20.180794 | orchestrator | + all_metadata = (known after apply) 2026-02-18 02:27:20.180798 | orchestrator | + all_tags = (known after apply) 2026-02-18 02:27:20.180801 | orchestrator | + availability_zone = "nova" 2026-02-18 02:27:20.180805 | orchestrator | + config_drive = true 2026-02-18 02:27:20.180809 | orchestrator | + created = (known after apply) 2026-02-18 02:27:20.180812 | orchestrator | + flavor_id = (known after apply) 2026-02-18 02:27:20.180816 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-18 02:27:20.180820 | orchestrator | + force_delete = false 2026-02-18 02:27:20.180824 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-18 02:27:20.180827 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.180831 | orchestrator | + image_id = (known after apply) 2026-02-18 02:27:20.180835 | orchestrator | + image_name = (known after apply) 2026-02-18 02:27:20.180838 | orchestrator | + key_pair = "testbed" 2026-02-18 02:27:20.180842 | orchestrator | + name = "testbed-node-0" 2026-02-18 02:27:20.180846 | orchestrator | + power_state = "active" 2026-02-18 02:27:20.180850 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.180853 | orchestrator | + security_groups = (known after apply) 2026-02-18 02:27:20.180857 | orchestrator | + stop_before_destroy = false 2026-02-18 02:27:20.180861 | orchestrator | + updated = (known after apply) 2026-02-18 02:27:20.180865 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-18 02:27:20.180868 | orchestrator | 2026-02-18 02:27:20.180872 | orchestrator | + block_device { 2026-02-18 02:27:20.180876 | orchestrator | + boot_index = 0 2026-02-18 02:27:20.180880 | orchestrator | + delete_on_termination = false 2026-02-18 02:27:20.180883 | orchestrator | + destination_type = "volume" 2026-02-18 02:27:20.180887 | orchestrator | + multiattach = false 2026-02-18 02:27:20.180891 | orchestrator | + source_type = "volume" 2026-02-18 02:27:20.180895 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.180899 | orchestrator | } 2026-02-18 02:27:20.180905 | orchestrator | 2026-02-18 02:27:20.180911 | orchestrator | + network { 2026-02-18 02:27:20.180916 | orchestrator | + access_network = false 2026-02-18 02:27:20.180926 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-18 02:27:20.180933 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-18 02:27:20.180939 | orchestrator | + mac = (known after apply) 2026-02-18 02:27:20.180944 | orchestrator | + name = (known after apply) 2026-02-18 02:27:20.180950 | orchestrator | + port = (known after apply) 2026-02-18 02:27:20.180955 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.180961 | orchestrator | } 2026-02-18 02:27:20.180967 | orchestrator | } 2026-02-18 02:27:20.180976 | orchestrator | 2026-02-18 02:27:20.180982 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-18 02:27:20.180988 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-18 02:27:20.180994 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-18 02:27:20.181007 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-18 02:27:20.181014 | orchestrator | + all_metadata = (known after apply) 2026-02-18 02:27:20.181020 | orchestrator | + all_tags = (known after apply) 2026-02-18 02:27:20.181024 | orchestrator | + availability_zone = "nova" 2026-02-18 02:27:20.181028 
| orchestrator | + config_drive = true 2026-02-18 02:27:20.181032 | orchestrator | + created = (known after apply) 2026-02-18 02:27:20.181036 | orchestrator | + flavor_id = (known after apply) 2026-02-18 02:27:20.181039 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-18 02:27:20.181043 | orchestrator | + force_delete = false 2026-02-18 02:27:20.181047 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-18 02:27:20.181051 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.181054 | orchestrator | + image_id = (known after apply) 2026-02-18 02:27:20.181058 | orchestrator | + image_name = (known after apply) 2026-02-18 02:27:20.181062 | orchestrator | + key_pair = "testbed" 2026-02-18 02:27:20.181066 | orchestrator | + name = "testbed-node-1" 2026-02-18 02:27:20.181070 | orchestrator | + power_state = "active" 2026-02-18 02:27:20.181073 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.181077 | orchestrator | + security_groups = (known after apply) 2026-02-18 02:27:20.181081 | orchestrator | + stop_before_destroy = false 2026-02-18 02:27:20.181084 | orchestrator | + updated = (known after apply) 2026-02-18 02:27:20.181088 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-18 02:27:20.181092 | orchestrator | 2026-02-18 02:27:20.181096 | orchestrator | + block_device { 2026-02-18 02:27:20.181099 | orchestrator | + boot_index = 0 2026-02-18 02:27:20.181103 | orchestrator | + delete_on_termination = false 2026-02-18 02:27:20.181107 | orchestrator | + destination_type = "volume" 2026-02-18 02:27:20.181110 | orchestrator | + multiattach = false 2026-02-18 02:27:20.181114 | orchestrator | + source_type = "volume" 2026-02-18 02:27:20.181118 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.181122 | orchestrator | } 2026-02-18 02:27:20.181126 | orchestrator | 2026-02-18 02:27:20.181129 | orchestrator | + network { 2026-02-18 02:27:20.181133 | orchestrator | + access_network = 
false 2026-02-18 02:27:20.181137 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-18 02:27:20.181140 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-18 02:27:20.181163 | orchestrator | + mac = (known after apply) 2026-02-18 02:27:20.181170 | orchestrator | + name = (known after apply) 2026-02-18 02:27:20.181179 | orchestrator | + port = (known after apply) 2026-02-18 02:27:20.181186 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.181192 | orchestrator | } 2026-02-18 02:27:20.181198 | orchestrator | } 2026-02-18 02:27:20.181207 | orchestrator | 2026-02-18 02:27:20.181213 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-18 02:27:20.181218 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-18 02:27:20.181224 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-18 02:27:20.181229 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-18 02:27:20.181237 | orchestrator | + all_metadata = (known after apply) 2026-02-18 02:27:20.181242 | orchestrator | + all_tags = (known after apply) 2026-02-18 02:27:20.181255 | orchestrator | + availability_zone = "nova" 2026-02-18 02:27:20.181262 | orchestrator | + config_drive = true 2026-02-18 02:27:20.181268 | orchestrator | + created = (known after apply) 2026-02-18 02:27:20.181274 | orchestrator | + flavor_id = (known after apply) 2026-02-18 02:27:20.181281 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-18 02:27:20.181286 | orchestrator | + force_delete = false 2026-02-18 02:27:20.181292 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-18 02:27:20.181297 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.181302 | orchestrator | + image_id = (known after apply) 2026-02-18 02:27:20.181314 | orchestrator | + image_name = (known after apply) 2026-02-18 02:27:20.181320 | orchestrator | + key_pair = "testbed" 2026-02-18 02:27:20.181326 | orchestrator | + name = 
"testbed-node-2" 2026-02-18 02:27:20.181332 | orchestrator | + power_state = "active" 2026-02-18 02:27:20.181337 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.181344 | orchestrator | + security_groups = (known after apply) 2026-02-18 02:27:20.181349 | orchestrator | + stop_before_destroy = false 2026-02-18 02:27:20.181356 | orchestrator | + updated = (known after apply) 2026-02-18 02:27:20.181363 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-18 02:27:20.181367 | orchestrator | 2026-02-18 02:27:20.181371 | orchestrator | + block_device { 2026-02-18 02:27:20.181375 | orchestrator | + boot_index = 0 2026-02-18 02:27:20.181378 | orchestrator | + delete_on_termination = false 2026-02-18 02:27:20.181382 | orchestrator | + destination_type = "volume" 2026-02-18 02:27:20.181386 | orchestrator | + multiattach = false 2026-02-18 02:27:20.181389 | orchestrator | + source_type = "volume" 2026-02-18 02:27:20.181393 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.181397 | orchestrator | } 2026-02-18 02:27:20.181400 | orchestrator | 2026-02-18 02:27:20.181404 | orchestrator | + network { 2026-02-18 02:27:20.181408 | orchestrator | + access_network = false 2026-02-18 02:27:20.181411 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-18 02:27:20.181415 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-18 02:27:20.181419 | orchestrator | + mac = (known after apply) 2026-02-18 02:27:20.181422 | orchestrator | + name = (known after apply) 2026-02-18 02:27:20.181426 | orchestrator | + port = (known after apply) 2026-02-18 02:27:20.181430 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.181434 | orchestrator | } 2026-02-18 02:27:20.181437 | orchestrator | } 2026-02-18 02:27:20.181444 | orchestrator | 2026-02-18 02:27:20.181448 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-18 02:27:20.181452 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-18 02:27:20.181455 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-18 02:27:20.181459 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-18 02:27:20.181463 | orchestrator | + all_metadata = (known after apply) 2026-02-18 02:27:20.181466 | orchestrator | + all_tags = (known after apply) 2026-02-18 02:27:20.181470 | orchestrator | + availability_zone = "nova" 2026-02-18 02:27:20.181474 | orchestrator | + config_drive = true 2026-02-18 02:27:20.181478 | orchestrator | + created = (known after apply) 2026-02-18 02:27:20.181481 | orchestrator | + flavor_id = (known after apply) 2026-02-18 02:27:20.181485 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-18 02:27:20.181491 | orchestrator | + force_delete = false 2026-02-18 02:27:20.181497 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-18 02:27:20.181502 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.181508 | orchestrator | + image_id = (known after apply) 2026-02-18 02:27:20.181514 | orchestrator | + image_name = (known after apply) 2026-02-18 02:27:20.181520 | orchestrator | + key_pair = "testbed" 2026-02-18 02:27:20.181523 | orchestrator | + name = "testbed-node-3" 2026-02-18 02:27:20.181527 | orchestrator | + power_state = "active" 2026-02-18 02:27:20.181531 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.181535 | orchestrator | + security_groups = (known after apply) 2026-02-18 02:27:20.181538 | orchestrator | + stop_before_destroy = false 2026-02-18 02:27:20.181542 | orchestrator | + updated = (known after apply) 2026-02-18 02:27:20.181546 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-18 02:27:20.181549 | orchestrator | 2026-02-18 02:27:20.181553 | orchestrator | + block_device { 2026-02-18 02:27:20.181562 | orchestrator | + boot_index = 0 2026-02-18 02:27:20.181566 | orchestrator | + delete_on_termination = false 2026-02-18 
02:27:20.181570 | orchestrator | + destination_type = "volume" 2026-02-18 02:27:20.181577 | orchestrator | + multiattach = false 2026-02-18 02:27:20.181581 | orchestrator | + source_type = "volume" 2026-02-18 02:27:20.181585 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.181589 | orchestrator | } 2026-02-18 02:27:20.181592 | orchestrator | 2026-02-18 02:27:20.181596 | orchestrator | + network { 2026-02-18 02:27:20.181600 | orchestrator | + access_network = false 2026-02-18 02:27:20.181604 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-18 02:27:20.181607 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-18 02:27:20.181611 | orchestrator | + mac = (known after apply) 2026-02-18 02:27:20.181615 | orchestrator | + name = (known after apply) 2026-02-18 02:27:20.181619 | orchestrator | + port = (known after apply) 2026-02-18 02:27:20.181624 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.181630 | orchestrator | } 2026-02-18 02:27:20.181636 | orchestrator | } 2026-02-18 02:27:20.181644 | orchestrator | 2026-02-18 02:27:20.181654 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-18 02:27:20.181661 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-18 02:27:20.181666 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-18 02:27:20.181672 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-18 02:27:20.181678 | orchestrator | + all_metadata = (known after apply) 2026-02-18 02:27:20.181684 | orchestrator | + all_tags = (known after apply) 2026-02-18 02:27:20.181689 | orchestrator | + availability_zone = "nova" 2026-02-18 02:27:20.181696 | orchestrator | + config_drive = true 2026-02-18 02:27:20.181702 | orchestrator | + created = (known after apply) 2026-02-18 02:27:20.181707 | orchestrator | + flavor_id = (known after apply) 2026-02-18 02:27:20.181713 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-18 02:27:20.181719 | 
orchestrator | + force_delete = false 2026-02-18 02:27:20.181725 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-18 02:27:20.181732 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.181738 | orchestrator | + image_id = (known after apply) 2026-02-18 02:27:20.181744 | orchestrator | + image_name = (known after apply) 2026-02-18 02:27:20.181750 | orchestrator | + key_pair = "testbed" 2026-02-18 02:27:20.181756 | orchestrator | + name = "testbed-node-4" 2026-02-18 02:27:20.181762 | orchestrator | + power_state = "active" 2026-02-18 02:27:20.181769 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.181775 | orchestrator | + security_groups = (known after apply) 2026-02-18 02:27:20.181780 | orchestrator | + stop_before_destroy = false 2026-02-18 02:27:20.181786 | orchestrator | + updated = (known after apply) 2026-02-18 02:27:20.181792 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-18 02:27:20.181798 | orchestrator | 2026-02-18 02:27:20.181804 | orchestrator | + block_device { 2026-02-18 02:27:20.181811 | orchestrator | + boot_index = 0 2026-02-18 02:27:20.181816 | orchestrator | + delete_on_termination = false 2026-02-18 02:27:20.181822 | orchestrator | + destination_type = "volume" 2026-02-18 02:27:20.181829 | orchestrator | + multiattach = false 2026-02-18 02:27:20.181834 | orchestrator | + source_type = "volume" 2026-02-18 02:27:20.181838 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.181842 | orchestrator | } 2026-02-18 02:27:20.181846 | orchestrator | 2026-02-18 02:27:20.181850 | orchestrator | + network { 2026-02-18 02:27:20.181854 | orchestrator | + access_network = false 2026-02-18 02:27:20.181858 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-18 02:27:20.181861 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-18 02:27:20.181865 | orchestrator | + mac = (known after apply) 2026-02-18 02:27:20.181869 | orchestrator | + name = (known 
after apply) 2026-02-18 02:27:20.181873 | orchestrator | + port = (known after apply) 2026-02-18 02:27:20.181877 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.181880 | orchestrator | } 2026-02-18 02:27:20.181884 | orchestrator | } 2026-02-18 02:27:20.181895 | orchestrator | 2026-02-18 02:27:20.181900 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-18 02:27:20.181903 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-18 02:27:20.181907 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-18 02:27:20.181911 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-18 02:27:20.181915 | orchestrator | + all_metadata = (known after apply) 2026-02-18 02:27:20.181919 | orchestrator | + all_tags = (known after apply) 2026-02-18 02:27:20.181922 | orchestrator | + availability_zone = "nova" 2026-02-18 02:27:20.181926 | orchestrator | + config_drive = true 2026-02-18 02:27:20.181930 | orchestrator | + created = (known after apply) 2026-02-18 02:27:20.181934 | orchestrator | + flavor_id = (known after apply) 2026-02-18 02:27:20.181938 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-18 02:27:20.181941 | orchestrator | + force_delete = false 2026-02-18 02:27:20.181950 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-18 02:27:20.181956 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.181963 | orchestrator | + image_id = (known after apply) 2026-02-18 02:27:20.181971 | orchestrator | + image_name = (known after apply) 2026-02-18 02:27:20.181978 | orchestrator | + key_pair = "testbed" 2026-02-18 02:27:20.181984 | orchestrator | + name = "testbed-node-5" 2026-02-18 02:27:20.181990 | orchestrator | + power_state = "active" 2026-02-18 02:27:20.181996 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.182001 | orchestrator | + security_groups = (known after apply) 2026-02-18 02:27:20.182007 | orchestrator | + 
stop_before_destroy = false 2026-02-18 02:27:20.182057 | orchestrator | + updated = (known after apply) 2026-02-18 02:27:20.182067 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-18 02:27:20.182074 | orchestrator | 2026-02-18 02:27:20.182080 | orchestrator | + block_device { 2026-02-18 02:27:20.182086 | orchestrator | + boot_index = 0 2026-02-18 02:27:20.182091 | orchestrator | + delete_on_termination = false 2026-02-18 02:27:20.182097 | orchestrator | + destination_type = "volume" 2026-02-18 02:27:20.182104 | orchestrator | + multiattach = false 2026-02-18 02:27:20.182108 | orchestrator | + source_type = "volume" 2026-02-18 02:27:20.182112 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.182116 | orchestrator | } 2026-02-18 02:27:20.182120 | orchestrator | 2026-02-18 02:27:20.182123 | orchestrator | + network { 2026-02-18 02:27:20.182127 | orchestrator | + access_network = false 2026-02-18 02:27:20.182131 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-18 02:27:20.182135 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-18 02:27:20.182138 | orchestrator | + mac = (known after apply) 2026-02-18 02:27:20.182166 | orchestrator | + name = (known after apply) 2026-02-18 02:27:20.182171 | orchestrator | + port = (known after apply) 2026-02-18 02:27:20.182175 | orchestrator | + uuid = (known after apply) 2026-02-18 02:27:20.182179 | orchestrator | } 2026-02-18 02:27:20.182182 | orchestrator | } 2026-02-18 02:27:20.182186 | orchestrator | 2026-02-18 02:27:20.182190 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-18 02:27:20.182194 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-18 02:27:20.182198 | orchestrator | + fingerprint = (known after apply) 2026-02-18 02:27:20.182202 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.182206 | orchestrator | + name = "testbed" 2026-02-18 02:27:20.182209 | orchestrator | + private_key = 
(sensitive value) 2026-02-18 02:27:20.182213 | orchestrator | + public_key = (known after apply) 2026-02-18 02:27:20.182217 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.182221 | orchestrator | + user_id = (known after apply) 2026-02-18 02:27:20.182224 | orchestrator | } 2026-02-18 02:27:20.182228 | orchestrator | 2026-02-18 02:27:20.182232 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-18 02:27:20.182236 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-18 02:27:20.182246 | orchestrator | + device = (known after apply) 2026-02-18 02:27:20.182249 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.182253 | orchestrator | + instance_id = (known after apply) 2026-02-18 02:27:20.182257 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.182261 | orchestrator | + volume_id = (known after apply) 2026-02-18 02:27:20.182264 | orchestrator | } 2026-02-18 02:27:20.182272 | orchestrator | 2026-02-18 02:27:20.182276 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-18 02:27:20.182280 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-18 02:27:20.182284 | orchestrator | + device = (known after apply) 2026-02-18 02:27:20.182290 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.182295 | orchestrator | + instance_id = (known after apply) 2026-02-18 02:27:20.182305 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.182312 | orchestrator | + volume_id = (known after apply) 2026-02-18 02:27:20.182317 | orchestrator | } 2026-02-18 02:27:20.182323 | orchestrator | 2026-02-18 02:27:20.182329 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-18 02:27:20.182335 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-18 02:27:20.185078 | orchestrator | + network_id = (known after apply) 2026-02-18 02:27:20.185082 | orchestrator | + no_gateway = false 2026-02-18 02:27:20.185086 | orchestrator | + region = (known after apply) 2026-02-18 02:27:20.185089 | orchestrator | + service_types = (known after apply) 2026-02-18 02:27:20.185098 | orchestrator | + tenant_id = (known after apply) 2026-02-18 02:27:20.185104 | orchestrator | 2026-02-18 02:27:20.185110 | orchestrator | + allocation_pool { 2026-02-18 02:27:20.185115 | orchestrator | + end = "192.168.31.250" 2026-02-18 02:27:20.185125 | orchestrator | + start = "192.168.31.200" 2026-02-18 02:27:20.185132 | orchestrator | } 2026-02-18 02:27:20.185137 | orchestrator | } 2026-02-18 02:27:20.185180 | orchestrator | 2026-02-18 02:27:20.185187 | orchestrator | # terraform_data.image will be created 2026-02-18 02:27:20.185193 | orchestrator | + resource "terraform_data" "image" { 2026-02-18 02:27:20.185199 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.185204 | orchestrator | + input = "Ubuntu 24.04" 2026-02-18 02:27:20.185210 | orchestrator | + output = (known after apply) 2026-02-18 02:27:20.185216 | orchestrator | } 2026-02-18 02:27:20.185222 | orchestrator | 2026-02-18 02:27:20.185228 | orchestrator | # terraform_data.image_node will be created 2026-02-18 02:27:20.185234 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-18 02:27:20.185240 | orchestrator | + id = (known after apply) 2026-02-18 02:27:20.185246 | orchestrator | + input = "Ubuntu 24.04" 2026-02-18 02:27:20.185252 | orchestrator | + output = (known after apply) 2026-02-18 02:27:20.185258 | orchestrator | } 2026-02-18 02:27:20.185265 | orchestrator | 2026-02-18 02:27:20.185269 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
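Read back from the plan output above, the management subnet (with its allocation pool) and the VRRP rule correspond to Terraform definitions roughly like the following sketch. The attribute values are taken from the plan; the `network_id` and `security_group_id` references and the resource wiring are assumptions, not taken from the actual testbed sources:

```hcl
# Sketch reconstructed from the plan output; values match the plan,
# resource references are assumed.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

# VRRP has no named protocol keyword here, so the plan shows the
# IP protocol number 112.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id # assumed reference
}
```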
2026-02-18 02:27:20.185273 | orchestrator |
2026-02-18 02:27:20.185277 | orchestrator | Changes to Outputs:
2026-02-18 02:27:20.185281 | orchestrator | + manager_address = (sensitive value)
2026-02-18 02:27:20.185285 | orchestrator | + private_key = (sensitive value)
2026-02-18 02:27:20.438570 | orchestrator | terraform_data.image_node: Creating...
2026-02-18 02:27:20.438630 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=b534b2af-91dd-a698-ac26-4e35f1473b11]
2026-02-18 02:27:20.438681 | orchestrator | terraform_data.image: Creating...
2026-02-18 02:27:20.439442 | orchestrator | terraform_data.image: Creation complete after 0s [id=eaf8c7e3-2730-0321-8cf5-676644762866]
2026-02-18 02:27:20.457227 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-18 02:27:20.457765 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-18 02:27:20.464621 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-18 02:27:20.472943 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-18 02:27:20.473012 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-18 02:27:20.473020 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-18 02:27:20.473027 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-18 02:27:20.473034 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-18 02:27:20.476497 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-18 02:27:20.477452 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-18 02:27:20.944003 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-18 02:27:20.946879 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-18 02:27:20.948704 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-18 02:27:20.950587 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-18 02:27:20.962914 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-02-18 02:27:20.966074 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-18 02:27:21.511973 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=befebd07-a423-4beb-9183-2481357156a1]
2026-02-18 02:27:21.518893 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-18 02:27:24.082295 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=f0ab076a-73e2-49a0-ad75-65c4c5564b19]
2026-02-18 02:27:24.086543 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-18 02:27:24.087541 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=462a6373-b3e1-4411-8b4b-92b19c9bbd9f]
2026-02-18 02:27:24.092644 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-18 02:27:24.127366 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=3f0eb34d-4d19-41b1-a545-9be91ac9c911]
2026-02-18 02:27:24.127637 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=d8cf58e5-ac4f-4786-ab18-80916d08d0f3]
2026-02-18 02:27:24.134097 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=136ad752-18af-4e59-8421-509e0a1d154d]
2026-02-18 02:27:24.134294 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-18 02:27:24.134875 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-18 02:27:24.140111 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-18 02:27:24.152113 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=b30dbb74-62b1-4c30-bd5a-d0d123586322]
2026-02-18 02:27:24.157518 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-18 02:27:24.202207 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=b5427a30-1286-46b2-89f3-e63c343feb5d]
2026-02-18 02:27:24.208748 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=c4d92644-33e6-4467-94f0-587e390b3e2b]
2026-02-18 02:27:24.210489 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-18 02:27:24.214237 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-18 02:27:24.214841 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=51806715bd46989fc3776d0e91fcf34e5a4e4d27]
2026-02-18 02:27:24.217983 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=0606cde6-5f5c-43d5-b7e5-8f6931209fa6]
2026-02-18 02:27:24.219187 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=68bdd23926bbe336470712129147e22c703477ce]
2026-02-18 02:27:24.221456 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-18 02:27:24.877673 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=3960e98d-77d9-4e0f-a638-1ea8be384186]
2026-02-18 02:27:25.518282 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=40f81b0c-a38b-4af8-98bb-ad263119e1bd]
2026-02-18 02:27:25.523120 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-18 02:27:27.472656 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=5e163393-99f4-4b13-b667-4f0af745a039]
2026-02-18 02:27:27.495335 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=7b754618-e661-413d-92c2-ebb9259de61f]
2026-02-18 02:27:27.545095 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=d638dc9f-a3be-40fa-a76f-064f22b3f5a8]
2026-02-18 02:27:27.548215 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=907e2eef-6213-4277-a236-2ae103a400c6]
2026-02-18 02:27:27.558502 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=f33eab1c-67cb-4270-8b47-8509ec50b93a]
2026-02-18 02:27:27.568687 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=ab2d03ed-cd4a-48d1-b8d2-d00a10f41162]
2026-02-18 02:27:28.953321 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=4dd7fd1e-d033-40bd-84f0-145b84698eee]
2026-02-18 02:27:28.957896 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-18 02:27:28.958079 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-18 02:27:28.959568 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-18 02:27:29.140422 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=37b53772-5c1c-4d82-9821-756c83590022]
2026-02-18 02:27:29.150123 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-18 02:27:29.150232 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-18 02:27:29.150433 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-18 02:27:29.151231 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-18 02:27:29.161434 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-18 02:27:29.161488 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-18 02:27:29.163619 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-18 02:27:29.163658 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-18 02:27:29.220114 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=807ca181-438b-4177-94a2-12c110d36a7d]
2026-02-18 02:27:29.229389 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-18 02:27:29.316047 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=12ee8da9-d7f3-4cb9-a889-8a80d78a09cb]
2026-02-18 02:27:29.326914 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-18 02:27:29.610777 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=bb03c19b-d1a1-4619-a6e5-3e9747f0ba89]
2026-02-18 02:27:29.620342 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-18 02:27:29.843048 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=43789de1-c7eb-482e-b0c2-6a9a9ca8ef74]
2026-02-18 02:27:29.846553 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-18 02:27:29.858793 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=2c9b5195-2c55-4a4c-98ee-7f7c48227a0a]
2026-02-18 02:27:29.863346 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=ab821646-d448-4578-ae09-7a258a4c87cb]
2026-02-18 02:27:29.863786 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-18 02:27:29.869027 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=2c30501a-cfca-4ab3-b446-f472e48f2a1c]
2026-02-18 02:27:29.872697 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-18 02:27:29.874544 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-18 02:27:29.913053 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=5733f1ab-da4f-48fc-94fd-b28accc67720]
2026-02-18 02:27:29.917680 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-18 02:27:29.921455 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=9350da0b-78ba-4dc7-ac35-5d081185d7d2]
2026-02-18 02:27:30.028652 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=db4addd0-08e7-460a-b76c-0bbf3ad9ecc7]
2026-02-18 02:27:30.037797 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=3c9ebfa7-9c85-4b97-8533-c22d35dd3fa4]
2026-02-18 02:27:30.039497 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=23b168dd-b5de-40e3-928e-1e7b080174f5]
2026-02-18 02:27:30.195863 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=d62fe2a2-72c0-41c7-aa6e-9a990e568581]
2026-02-18 02:27:30.342988 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=d757b8cf-5ffe-44f2-87ee-619f4b7d2c67]
2026-02-18 02:27:30.420171 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=487c7969-637c-403a-809d-5cddb6860d09]
2026-02-18 02:27:30.427734 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=68defc63-2716-4bd4-9fc4-ed87599cf104]
2026-02-18 02:27:30.509631 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=6e36d04b-aba8-4cbc-a71e-15223c31eb08]
2026-02-18 02:27:31.115549 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=ca1edae2-baab-4b7d-822a-9116c03d0671]
2026-02-18 02:27:31.129592 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-18 02:27:31.150461 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-18 02:27:31.151644 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-18 02:27:31.158609 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-18 02:27:31.158757 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-18 02:27:31.163327 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-18 02:27:31.170650 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-18 02:27:32.913190 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=df64204e-ed58-467e-9afd-0680b3980b28]
2026-02-18 02:27:32.925383 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-18 02:27:32.926073 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-18 02:27:32.926184 | orchestrator | local_file.inventory: Creating...
2026-02-18 02:27:32.933255 | orchestrator | local_file.inventory: Creation complete after 0s [id=6ae1873cd2b4402ce3a15bf7c4d6f3c626012eb5]
2026-02-18 02:27:32.934900 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=fb811571449d967db2f53c254ff13f10bc45a754]
2026-02-18 02:27:33.635126 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=df64204e-ed58-467e-9afd-0680b3980b28]
2026-02-18 02:27:41.151057 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-18 02:27:41.156458 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-18 02:27:41.160889 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-18 02:27:41.160979 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-18 02:27:41.165458 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-18 02:27:41.172862 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-18 02:27:51.152261 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-18 02:27:51.157603 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-18 02:27:51.161999 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-18 02:27:51.162137 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-18 02:27:51.166456 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-18 02:27:51.173765 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-18 02:27:51.522974 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=ea40a229-0356-4d99-91be-a2d6bd00b561]
2026-02-18 02:27:51.623054 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=c8c2ead3-a80f-46ad-8fbc-41428c7cc6fb]
2026-02-18 02:27:51.760312 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=5ff87b2a-a58f-43a5-b09c-b6c9eac68621]
2026-02-18 02:27:52.178647 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=0cd7a69c-1c23-4812-8886-5566063dc758]
2026-02-18 02:28:01.165940 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-02-18 02:28:01.166059 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-02-18 02:28:01.888919 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=7a475508-e06a-420e-a87e-7bd4a570551f]
2026-02-18 02:28:01.889711 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=9b3d5295-bfbc-4081-86a3-9cd835870660]
2026-02-18 02:28:01.905840 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-18 02:28:01.917649 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7847150097477539357]
2026-02-18 02:28:01.924122 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-18 02:28:01.925890 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-18 02:28:01.930060 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-18 02:28:01.932782 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-18 02:28:01.932848 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-18 02:28:01.934530 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-18 02:28:01.940456 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-18 02:28:01.940519 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-18 02:28:01.950983 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-18 02:28:01.965594 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-18 02:28:05.338630 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=c8c2ead3-a80f-46ad-8fbc-41428c7cc6fb/b5427a30-1286-46b2-89f3-e63c343feb5d]
2026-02-18 02:28:05.348364 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=ea40a229-0356-4d99-91be-a2d6bd00b561/c4d92644-33e6-4467-94f0-587e390b3e2b]
2026-02-18 02:28:05.369240 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=9b3d5295-bfbc-4081-86a3-9cd835870660/3f0eb34d-4d19-41b1-a545-9be91ac9c911]
2026-02-18 02:28:05.381454 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=c8c2ead3-a80f-46ad-8fbc-41428c7cc6fb/136ad752-18af-4e59-8421-509e0a1d154d]
2026-02-18 02:28:05.398097 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=ea40a229-0356-4d99-91be-a2d6bd00b561/f0ab076a-73e2-49a0-ad75-65c4c5564b19]
2026-02-18 02:28:05.408991 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=9b3d5295-bfbc-4081-86a3-9cd835870660/0606cde6-5f5c-43d5-b7e5-8f6931209fa6]
2026-02-18 02:28:11.472666 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=c8c2ead3-a80f-46ad-8fbc-41428c7cc6fb/b30dbb74-62b1-4c30-bd5a-d0d123586322]
2026-02-18 02:28:11.485142 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=ea40a229-0356-4d99-91be-a2d6bd00b561/d8cf58e5-ac4f-4786-ab18-80916d08d0f3]
2026-02-18 02:28:11.498213 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=9b3d5295-bfbc-4081-86a3-9cd835870660/462a6373-b3e1-4411-8b4b-92b19c9bbd9f]
2026-02-18 02:28:11.966495 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-18 02:28:21.967686 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-18 02:28:22.324504 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=ef65c984-6099-4455-9b93-4f8dca1e2a47]
2026-02-18 02:28:22.344892 | orchestrator |
2026-02-18 02:28:22.344984 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-18 02:28:22.344997 | orchestrator |
2026-02-18 02:28:22.345005 | orchestrator | Outputs:
2026-02-18 02:28:22.345012 | orchestrator |
2026-02-18 02:28:22.345019 | orchestrator | manager_address =
2026-02-18 02:28:22.345027 | orchestrator | private_key =
2026-02-18 02:28:22.432915 | orchestrator | ok: Runtime: 0:01:08.021296
2026-02-18 02:28:22.456886 |
2026-02-18 02:28:22.457072 | TASK [Fetch manager address]
2026-02-18 02:28:22.932785 | orchestrator | ok
2026-02-18 02:28:22.943295 |
2026-02-18 02:28:22.943426 | TASK [Set manager_host address]
2026-02-18 02:28:23.026098 | orchestrator | ok
2026-02-18 02:28:23.039350 |
2026-02-18 02:28:23.039506 | LOOP [Update ansible collections]
2026-02-18 02:28:24.178216 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-18 02:28:24.178572 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-18 02:28:24.178629 | orchestrator | Starting galaxy collection install process
2026-02-18 02:28:24.178670 | orchestrator | Process install dependency map
2026-02-18 02:28:24.178712 | orchestrator | Starting collection install process
2026-02-18 02:28:24.178748 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-18 02:28:24.178784 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-18 02:28:24.178824 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-18 02:28:24.178931 | orchestrator | ok: Item: commons Runtime: 0:00:00.789241
2026-02-18 02:28:25.128777 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-18 02:28:25.129054 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-18 02:28:25.129134 | orchestrator | Starting galaxy collection install process
2026-02-18 02:28:25.129193 | orchestrator | Process install dependency map
2026-02-18 02:28:25.129248 | orchestrator | Starting collection install process
2026-02-18 02:28:25.129299 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-18 02:28:25.129350 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-18 02:28:25.129397 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-18 02:28:25.129473 | orchestrator | ok: Item: services Runtime: 0:00:00.663663
2026-02-18 02:28:25.149143 |
2026-02-18 02:28:25.149394 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-18 02:28:35.726779 | orchestrator | ok
2026-02-18 02:28:35.737114 |
2026-02-18 02:28:35.737230 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-18 02:29:35.787681 | orchestrator | ok
2026-02-18 02:29:35.797937 |
2026-02-18 02:29:35.798099 | TASK [Fetch manager ssh hostkey]
2026-02-18 02:29:37.370122 | orchestrator | Output suppressed because no_log was given
2026-02-18 02:29:37.388912 |
2026-02-18 02:29:37.389164 | TASK [Get ssh keypair from terraform environment]
2026-02-18 02:29:37.929586 | orchestrator | ok: Runtime: 0:00:00.007763
2026-02-18 02:29:37.946963 |
2026-02-18 02:29:37.947167 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-18 02:29:37.997221 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-18 02:29:38.007201 |
2026-02-18 02:29:38.007330 | TASK [Run manager part 0]
2026-02-18 02:29:39.000694 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-18 02:29:39.060114 | orchestrator |
2026-02-18 02:29:39.060175 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-18 02:29:39.060185 | orchestrator |
2026-02-18 02:29:39.060199 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-18 02:29:40.995840 | orchestrator | ok: [testbed-manager]
2026-02-18 02:29:40.995905 | orchestrator |
2026-02-18 02:29:40.995928 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-18 02:29:40.995939 | orchestrator |
2026-02-18 02:29:40.995948 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-18 02:29:42.911277 | orchestrator | ok: [testbed-manager]
2026-02-18 02:29:42.911439 | orchestrator |
2026-02-18 02:29:42.911458 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-18 02:29:43.597343 | orchestrator | ok: [testbed-manager]
2026-02-18 02:29:43.597404 | orchestrator |
2026-02-18 02:29:43.597414 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-18 02:29:43.650334 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:29:43.650404 | orchestrator |
2026-02-18 02:29:43.650418 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-18 02:29:43.681971 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:29:43.682048 | orchestrator |
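The two `ansible-galaxy` installs logged in the "Update ansible collections" loop above (osism.commons and osism.services, both pinned to 999.0.0) could equivalently be driven from a requirements file; this is a sketch under that assumption, not the testbed's actual invocation:

```yaml
# requirements.yml — hypothetical equivalent of the two install steps above,
# consumed with: ansible-galaxy collection install -r requirements.yml
collections:
  - name: osism.commons
    version: 999.0.0
  - name: osism.services
    version: 999.0.0
```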
2026-02-18 02:29:43.682058 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-18 02:29:43.718185 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:29:43.718280 | orchestrator | 2026-02-18 02:29:43.718292 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-18 02:29:43.750035 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:29:43.750113 | orchestrator | 2026-02-18 02:29:43.750126 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-18 02:29:43.786773 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:29:43.786843 | orchestrator | 2026-02-18 02:29:43.786854 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-18 02:29:43.819692 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:29:43.819742 | orchestrator | 2026-02-18 02:29:43.819754 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-18 02:29:43.850570 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:29:43.850613 | orchestrator | 2026-02-18 02:29:43.850623 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-18 02:29:44.595739 | orchestrator | changed: [testbed-manager] 2026-02-18 02:29:44.595812 | orchestrator | 2026-02-18 02:29:44.595821 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-18 02:32:43.509970 | orchestrator | changed: [testbed-manager] 2026-02-18 02:32:43.510099 | orchestrator | 2026-02-18 02:32:43.510117 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-18 02:34:06.034838 | orchestrator | changed: [testbed-manager] 2026-02-18 02:34:06.034919 | orchestrator | 2026-02-18 02:34:06.034930 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-02-18 02:34:31.823136 | orchestrator | changed: [testbed-manager] 2026-02-18 02:34:31.823260 | orchestrator | 2026-02-18 02:34:31.823291 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-18 02:34:42.330624 | orchestrator | changed: [testbed-manager] 2026-02-18 02:34:42.330675 | orchestrator | 2026-02-18 02:34:42.330683 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-18 02:34:42.378231 | orchestrator | ok: [testbed-manager] 2026-02-18 02:34:42.378319 | orchestrator | 2026-02-18 02:34:42.378339 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-18 02:34:43.232569 | orchestrator | ok: [testbed-manager] 2026-02-18 02:34:43.232687 | orchestrator | 2026-02-18 02:34:43.232718 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-18 02:34:44.006787 | orchestrator | changed: [testbed-manager] 2026-02-18 02:34:44.006870 | orchestrator | 2026-02-18 02:34:44.006883 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-18 02:34:51.122976 | orchestrator | changed: [testbed-manager] 2026-02-18 02:34:51.123013 | orchestrator | 2026-02-18 02:34:51.123031 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-18 02:34:57.899882 | orchestrator | changed: [testbed-manager] 2026-02-18 02:34:57.899973 | orchestrator | 2026-02-18 02:34:57.899991 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-18 02:35:00.886138 | orchestrator | changed: [testbed-manager] 2026-02-18 02:35:00.886223 | orchestrator | 2026-02-18 02:35:00.886236 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-18 02:35:02.930756 | 
orchestrator | changed: [testbed-manager] 2026-02-18 02:35:02.930798 | orchestrator | 2026-02-18 02:35:02.930806 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-18 02:35:04.145712 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-18 02:35:04.145812 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-18 02:35:04.145828 | orchestrator | 2026-02-18 02:35:04.145841 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-18 02:35:04.191214 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-18 02:35:04.191265 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-18 02:35:04.191270 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-18 02:35:04.191275 | orchestrator | deprecation_warnings=False in ansible.cfg. 
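The "Create directories in /opt/src" / "Sync sources in /opt/src" pair above loops over the same two repositories. A stand-alone sketch of that loop, assuming a temp directory in place of `/opt/src` (the commented rsync flags and `$CHECKOUT` source path are assumptions, not taken from this log):

```shell
# Default to a scratch directory so this can run unprivileged;
# the job itself targets /opt/src.
SRC_ROOT="${SRC_ROOT:-$(mktemp -d)}"

for repo in osism/ansible-collection-commons osism/ansible-collection-services; do
    mkdir -p "$SRC_ROOT/$repo"
    # Sync step (illustrative): rsync -a --delete "$CHECKOUT/$repo/" "$SRC_ROOT/$repo/"
done
```

The created trees are later installed as local collections ("Install local collections"), which is why the directory layout follows the `namespace/collection` naming of the repos.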
2026-02-18 02:35:08.512640 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-18 02:35:08.512746 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-18 02:35:08.512767 | orchestrator | 2026-02-18 02:35:08.512784 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-18 02:35:09.139788 | orchestrator | changed: [testbed-manager] 2026-02-18 02:35:09.139887 | orchestrator | 2026-02-18 02:35:09.139903 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-18 02:35:30.811714 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-18 02:35:30.811823 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-18 02:35:30.811847 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-18 02:35:30.811866 | orchestrator | 2026-02-18 02:35:30.811884 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-18 02:35:33.387474 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-18 02:35:33.387575 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-18 02:35:33.387588 | orchestrator | 2026-02-18 02:35:33.387598 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-18 02:35:33.387607 | orchestrator | 2026-02-18 02:35:33.387615 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-18 02:35:34.978869 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:34.978954 | orchestrator | 2026-02-18 02:35:34.978968 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-18 02:35:35.023731 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:35.023850 | 
orchestrator | 2026-02-18 02:35:35.023875 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-18 02:35:35.088767 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:35.088864 | orchestrator | 2026-02-18 02:35:35.088888 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-18 02:35:35.908562 | orchestrator | changed: [testbed-manager] 2026-02-18 02:35:35.908650 | orchestrator | 2026-02-18 02:35:35.908665 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-18 02:35:36.700226 | orchestrator | changed: [testbed-manager] 2026-02-18 02:35:36.700265 | orchestrator | 2026-02-18 02:35:36.700272 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-18 02:35:38.170962 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-18 02:35:38.170998 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-18 02:35:38.171003 | orchestrator | 2026-02-18 02:35:38.171015 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-18 02:35:39.650855 | orchestrator | changed: [testbed-manager] 2026-02-18 02:35:39.650911 | orchestrator | 2026-02-18 02:35:39.650921 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-18 02:35:41.510751 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-18 02:35:41.510866 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-18 02:35:41.510892 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-18 02:35:41.510912 | orchestrator | 2026-02-18 02:35:41.510933 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-18 02:35:41.571123 | orchestrator | skipping: 
[testbed-manager] 2026-02-18 02:35:41.571220 | orchestrator | 2026-02-18 02:35:41.571237 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-18 02:35:41.646598 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:35:41.646673 | orchestrator | 2026-02-18 02:35:41.646684 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-18 02:35:42.257049 | orchestrator | changed: [testbed-manager] 2026-02-18 02:35:42.257104 | orchestrator | 2026-02-18 02:35:42.257115 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-18 02:35:42.321622 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:35:42.321662 | orchestrator | 2026-02-18 02:35:42.321668 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-18 02:35:43.255242 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-18 02:35:43.255286 | orchestrator | changed: [testbed-manager] 2026-02-18 02:35:43.255295 | orchestrator | 2026-02-18 02:35:43.255303 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-18 02:35:43.288362 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:35:43.288401 | orchestrator | 2026-02-18 02:35:43.288409 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-18 02:35:43.322763 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:35:43.322818 | orchestrator | 2026-02-18 02:35:43.322828 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-18 02:35:43.349237 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:35:43.349272 | orchestrator | 2026-02-18 02:35:43.349278 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-18 02:35:43.423866 | 
orchestrator | skipping: [testbed-manager] 2026-02-18 02:35:43.423919 | orchestrator | 2026-02-18 02:35:43.423932 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-18 02:35:44.173590 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:44.173686 | orchestrator | 2026-02-18 02:35:44.173702 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-18 02:35:44.173715 | orchestrator | 2026-02-18 02:35:44.173727 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-18 02:35:45.626057 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:45.626093 | orchestrator | 2026-02-18 02:35:45.626099 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-18 02:35:46.698333 | orchestrator | changed: [testbed-manager] 2026-02-18 02:35:46.698423 | orchestrator | 2026-02-18 02:35:46.698439 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 02:35:46.698453 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-18 02:35:46.698464 | orchestrator | 2026-02-18 02:35:47.260628 | orchestrator | ok: Runtime: 0:06:08.507266 2026-02-18 02:35:47.277514 | 2026-02-18 02:35:47.277660 | TASK [Point out that logging in on the manager is now possible] 2026-02-18 02:35:47.326973 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-18 02:35:47.337687 | 2026-02-18 02:35:47.337809 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-18 02:35:47.386298 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
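The operator role above writes the three locale exports into the operator's `.bashrc` with lineinfile semantics: a line is appended only if it is not already present, so reruns stay idempotent. A tiny shell equivalent of that behaviour, using a temp file in place of the real `.bashrc`:

```shell
# Append $2 to file $1 only if an identical line is not already there
# (grep -x: whole-line match, -F: fixed string, no regex).
append_once() {
    grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

bashrc=$(mktemp)
for line in 'export LANGUAGE=C.UTF-8' 'export LANG=C.UTF-8' 'export LC_ALL=C.UTF-8'; do
    append_once "$bashrc" "$line"
    append_once "$bashrc" "$line"   # second call is a deliberate no-op
done
```

This is why the task reports `changed` on the first run but `ok` on subsequent runs: the managed lines already exist, so nothing is rewritten.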
2026-02-18 02:35:47.397132 | 2026-02-18 02:35:47.397277 | TASK [Run manager part 1 + 2] 2026-02-18 02:35:48.303401 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-18 02:35:48.364915 | orchestrator | 2026-02-18 02:35:48.364965 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-18 02:35:48.364972 | orchestrator | 2026-02-18 02:35:48.364985 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-18 02:35:51.124641 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:51.124750 | orchestrator | 2026-02-18 02:35:51.124773 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-18 02:35:51.168193 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:35:51.168246 | orchestrator | 2026-02-18 02:35:51.168256 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-18 02:35:51.213294 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:51.213352 | orchestrator | 2026-02-18 02:35:51.213366 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-18 02:35:51.249352 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:51.249397 | orchestrator | 2026-02-18 02:35:51.249405 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-18 02:35:51.318317 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:51.318368 | orchestrator | 2026-02-18 02:35:51.318376 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-18 02:35:51.397075 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:51.397153 | orchestrator | 2026-02-18 02:35:51.397169 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-18 02:35:51.444428 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-18 02:35:51.444479 | orchestrator | 2026-02-18 02:35:51.444486 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-18 02:35:52.224258 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:52.224375 | orchestrator | 2026-02-18 02:35:52.224403 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-18 02:35:52.272263 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:35:52.272322 | orchestrator | 2026-02-18 02:35:52.272329 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-18 02:35:53.737674 | orchestrator | changed: [testbed-manager] 2026-02-18 02:35:53.738426 | orchestrator | 2026-02-18 02:35:53.738463 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-18 02:35:54.333317 | orchestrator | ok: [testbed-manager] 2026-02-18 02:35:54.333391 | orchestrator | 2026-02-18 02:35:54.333401 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-18 02:35:55.530767 | orchestrator | changed: [testbed-manager] 2026-02-18 02:35:55.530834 | orchestrator | 2026-02-18 02:35:55.530847 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-18 02:36:12.345318 | orchestrator | changed: [testbed-manager] 2026-02-18 02:36:12.345429 | orchestrator | 2026-02-18 02:36:12.345446 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-18 02:36:13.027438 | orchestrator | ok: [testbed-manager] 2026-02-18 02:36:13.027500 | orchestrator | 2026-02-18 02:36:13.027512 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-18 02:36:13.082729 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:36:13.082797 | orchestrator | 2026-02-18 02:36:13.082810 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-18 02:36:14.097050 | orchestrator | changed: [testbed-manager] 2026-02-18 02:36:14.097128 | orchestrator | 2026-02-18 02:36:14.097149 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-18 02:36:15.085739 | orchestrator | changed: [testbed-manager] 2026-02-18 02:36:15.085790 | orchestrator | 2026-02-18 02:36:15.085800 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-18 02:36:15.708090 | orchestrator | changed: [testbed-manager] 2026-02-18 02:36:15.708163 | orchestrator | 2026-02-18 02:36:15.708174 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-18 02:36:15.752743 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-18 02:36:15.752855 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-18 02:36:15.752868 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-18 02:36:15.752878 | orchestrator | deprecation_warnings=False in ansible.cfg. 
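The repository role above removes the legacy `/etc/apt/sources.list` and replaces it with a deb822-style `ubuntu.sources` file ("Remove sources.list file" / "Copy ubuntu.sources file"). An illustrative minimal version of such a file, written to a temp path; the mirror URL, suites, and keyring path are assumptions, not read from this log:

```shell
# Real destination in the role: /etc/apt/sources.list.d/ubuntu.sources
target="${target:-$(mktemp)}"
cat > "$target" <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
```

After swapping the source definitions, the role runs "Update package cache" so apt picks up the new deb822 entries, which matches the changed/changed/changed sequence in the log.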
2026-02-18 02:36:18.376616 | orchestrator | changed: [testbed-manager] 2026-02-18 02:36:18.376682 | orchestrator | 2026-02-18 02:36:18.376693 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-18 02:36:28.474189 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-18 02:36:28.474296 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-18 02:36:28.474314 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-18 02:36:28.474326 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-18 02:36:28.474349 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-18 02:36:28.474361 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-18 02:36:28.474373 | orchestrator | 2026-02-18 02:36:28.474386 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-18 02:36:29.605788 | orchestrator | changed: [testbed-manager] 2026-02-18 02:36:29.605899 | orchestrator | 2026-02-18 02:36:29.605919 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-18 02:36:29.652355 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:36:29.652437 | orchestrator | 2026-02-18 02:36:29.652453 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-18 02:36:33.020936 | orchestrator | changed: [testbed-manager] 2026-02-18 02:36:33.020988 | orchestrator | 2026-02-18 02:36:33.020994 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-18 02:36:33.053247 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:36:33.053331 | orchestrator | 2026-02-18 02:36:33.053341 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-18 02:38:26.463050 | orchestrator | changed: [testbed-manager] 2026-02-18 
02:38:26.463090 | orchestrator | 2026-02-18 02:38:26.463096 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-18 02:38:27.772244 | orchestrator | ok: [testbed-manager] 2026-02-18 02:38:27.772316 | orchestrator | 2026-02-18 02:38:27.772335 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 02:38:27.772375 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-18 02:38:27.772389 | orchestrator | 2026-02-18 02:38:28.043701 | orchestrator | ok: Runtime: 0:02:40.194340 2026-02-18 02:38:28.062685 | 2026-02-18 02:38:28.062961 | TASK [Reboot manager] 2026-02-18 02:38:29.601411 | orchestrator | ok: Runtime: 0:00:01.066461 2026-02-18 02:38:29.619322 | 2026-02-18 02:38:29.619494 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-18 02:38:46.054944 | orchestrator | ok 2026-02-18 02:38:46.065905 | 2026-02-18 02:38:46.066043 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-18 02:39:46.112425 | orchestrator | ok 2026-02-18 02:39:46.122515 | 2026-02-18 02:39:46.122642 | TASK [Deploy manager + bootstrap nodes] 2026-02-18 02:39:48.858645 | orchestrator | 2026-02-18 02:39:48.858906 | orchestrator | # DEPLOY MANAGER 2026-02-18 02:39:48.858932 | orchestrator | 2026-02-18 02:39:48.858946 | orchestrator | + set -e 2026-02-18 02:39:48.858960 | orchestrator | + echo 2026-02-18 02:39:48.858973 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-18 02:39:48.858990 | orchestrator | + echo 2026-02-18 02:39:48.859039 | orchestrator | + cat /opt/manager-vars.sh 2026-02-18 02:39:48.862333 | orchestrator | export NUMBER_OF_NODES=6 2026-02-18 02:39:48.862413 | orchestrator | 2026-02-18 02:39:48.862425 | orchestrator | export CEPH_VERSION=reef 2026-02-18 02:39:48.862437 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-18 02:39:48.862447 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-18 02:39:48.862471 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-18 02:39:48.862480 | orchestrator | 2026-02-18 02:39:48.862494 | orchestrator | export ARA=false 2026-02-18 02:39:48.862502 | orchestrator | export DEPLOY_MODE=manager 2026-02-18 02:39:48.862515 | orchestrator | export TEMPEST=false 2026-02-18 02:39:48.862524 | orchestrator | export IS_ZUUL=true 2026-02-18 02:39:48.862532 | orchestrator | 2026-02-18 02:39:48.862545 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2026-02-18 02:39:48.862554 | orchestrator | export EXTERNAL_API=false 2026-02-18 02:39:48.862562 | orchestrator | 2026-02-18 02:39:48.862570 | orchestrator | export IMAGE_USER=ubuntu 2026-02-18 02:39:48.862582 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-18 02:39:48.862590 | orchestrator | 2026-02-18 02:39:48.862598 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-18 02:39:48.862615 | orchestrator | 2026-02-18 02:39:48.862624 | orchestrator | + echo 2026-02-18 02:39:48.862633 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-18 02:39:48.863751 | orchestrator | ++ export INTERACTIVE=false 2026-02-18 02:39:48.863766 | orchestrator | ++ INTERACTIVE=false 2026-02-18 02:39:48.863776 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-18 02:39:48.863786 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-18 02:39:48.864088 | orchestrator | + source /opt/manager-vars.sh 2026-02-18 02:39:48.864170 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-18 02:39:48.864187 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-18 02:39:48.864198 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-18 02:39:48.864209 | orchestrator | ++ CEPH_VERSION=reef 2026-02-18 02:39:48.864220 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-18 02:39:48.864233 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-18 02:39:48.864244 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-18 02:39:48.864255 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-18 02:39:48.864266 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-18 02:39:48.864315 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-18 02:39:48.864328 | orchestrator | ++ export ARA=false 2026-02-18 02:39:48.864340 | orchestrator | ++ ARA=false 2026-02-18 02:39:48.864351 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-18 02:39:48.864362 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-18 02:39:48.864373 | orchestrator | ++ export TEMPEST=false 2026-02-18 02:39:48.864385 | orchestrator | ++ TEMPEST=false 2026-02-18 02:39:48.864407 | orchestrator | ++ export IS_ZUUL=true 2026-02-18 02:39:48.864419 | orchestrator | ++ IS_ZUUL=true 2026-02-18 02:39:48.864430 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2026-02-18 02:39:48.864442 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2026-02-18 02:39:48.864453 | orchestrator | ++ export EXTERNAL_API=false 2026-02-18 02:39:48.864464 | orchestrator | ++ EXTERNAL_API=false 2026-02-18 02:39:48.864475 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-18 02:39:48.864486 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-18 02:39:48.864497 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-18 02:39:48.864508 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-18 02:39:48.864519 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-18 02:39:48.864530 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-18 02:39:48.864541 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-18 02:39:48.928844 | orchestrator | + docker version 2026-02-18 02:39:49.048299 | orchestrator | Client: Docker Engine - Community 2026-02-18 02:39:49.048414 | orchestrator | Version: 27.5.1 2026-02-18 02:39:49.048432 | orchestrator | API version: 1.47 2026-02-18 02:39:49.048445 | orchestrator | Go version: go1.22.11 2026-02-18 02:39:49.048455 | orchestrator | Git commit: 9f9e405 2026-02-18 02:39:49.048467 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-18 02:39:49.048479 | orchestrator | OS/Arch: linux/amd64 2026-02-18 02:39:49.048490 | orchestrator | Context: default 2026-02-18 02:39:49.048501 | orchestrator | 2026-02-18 02:39:49.048512 | orchestrator | Server: Docker Engine - Community 2026-02-18 02:39:49.048523 | orchestrator | Engine: 2026-02-18 02:39:49.048535 | orchestrator | Version: 27.5.1 2026-02-18 02:39:49.048546 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-18 02:39:49.048586 | orchestrator | Go version: go1.22.11 2026-02-18 02:39:49.048598 | orchestrator | Git commit: 4c9b3b0 2026-02-18 02:39:49.048609 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-18 02:39:49.048620 | orchestrator | OS/Arch: linux/amd64 2026-02-18 02:39:49.048631 | orchestrator | Experimental: false 2026-02-18 02:39:49.048642 | orchestrator | containerd: 2026-02-18 02:39:49.048654 | orchestrator | Version: v2.2.1 2026-02-18 02:39:49.048665 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-18 02:39:49.048677 | orchestrator | runc: 2026-02-18 02:39:49.048713 | orchestrator | Version: 1.3.4 2026-02-18 02:39:49.048725 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-18 02:39:49.048736 | orchestrator | docker-init: 2026-02-18 02:39:49.048747 | orchestrator | Version: 0.19.0 2026-02-18 02:39:49.048759 | orchestrator | GitCommit: de40ad0 2026-02-18 02:39:49.051391 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-18 02:39:49.061997 | orchestrator | + set -e 2026-02-18 02:39:49.062136 | orchestrator | + source /opt/manager-vars.sh 2026-02-18 02:39:49.062152 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-18 02:39:49.062162 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-18 02:39:49.062172 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-18 02:39:49.062182 | orchestrator | ++ CEPH_VERSION=reef 2026-02-18 02:39:49.062192 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-18 
02:39:49.062203 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-18 02:39:49.062213 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-18 02:39:49.062222 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-18 02:39:49.062233 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-18 02:39:49.062244 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-18 02:39:49.062255 | orchestrator | ++ export ARA=false
2026-02-18 02:39:49.062267 | orchestrator | ++ ARA=false
2026-02-18 02:39:49.062278 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-18 02:39:49.062289 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-18 02:39:49.062300 | orchestrator | ++ export TEMPEST=false
2026-02-18 02:39:49.062310 | orchestrator | ++ TEMPEST=false
2026-02-18 02:39:49.062321 | orchestrator | ++ export IS_ZUUL=true
2026-02-18 02:39:49.062332 | orchestrator | ++ IS_ZUUL=true
2026-02-18 02:39:49.062342 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 02:39:49.062354 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 02:39:49.062368 | orchestrator | ++ export EXTERNAL_API=false
2026-02-18 02:39:49.062388 | orchestrator | ++ EXTERNAL_API=false
2026-02-18 02:39:49.062406 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-18 02:39:49.062426 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-18 02:39:49.062446 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-18 02:39:49.062466 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-18 02:39:49.062484 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-18 02:39:49.062499 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-18 02:39:49.062510 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-18 02:39:49.062521 | orchestrator | ++ export INTERACTIVE=false
2026-02-18 02:39:49.062532 | orchestrator | ++ INTERACTIVE=false
2026-02-18 02:39:49.062543 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-18 02:39:49.062558 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-18 02:39:49.062569 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-18 02:39:49.062580 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-02-18 02:39:49.068331 | orchestrator | + set -e
2026-02-18 02:39:49.068409 | orchestrator | + VERSION=9.5.0
2026-02-18 02:39:49.068426 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-02-18 02:39:49.078107 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-18 02:39:49.078172 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-18 02:39:49.081846 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-18 02:39:49.085775 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-18 02:39:49.092274 | orchestrator | /opt/configuration ~
2026-02-18 02:39:49.092364 | orchestrator | + set -e
2026-02-18 02:39:49.092386 | orchestrator | + pushd /opt/configuration
2026-02-18 02:39:49.092404 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-18 02:39:49.094184 | orchestrator | + source /opt/venv/bin/activate
2026-02-18 02:39:49.095403 | orchestrator | ++ deactivate nondestructive
2026-02-18 02:39:49.095507 | orchestrator | ++ '[' -n '' ']'
2026-02-18 02:39:49.095529 | orchestrator | ++ '[' -n '' ']'
2026-02-18 02:39:49.095577 | orchestrator | ++ hash -r
2026-02-18 02:39:49.095596 | orchestrator | ++ '[' -n '' ']'
2026-02-18 02:39:49.095612 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-18 02:39:49.095628 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-18 02:39:49.095645 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-18 02:39:49.095662 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-18 02:39:49.095679 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-18 02:39:49.095721 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-18 02:39:49.095738 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-18 02:39:49.095755 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-18 02:39:49.095773 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-18 02:39:49.095790 | orchestrator | ++ export PATH
2026-02-18 02:39:49.095819 | orchestrator | ++ '[' -n '' ']'
2026-02-18 02:39:49.095837 | orchestrator | ++ '[' -z '' ']'
2026-02-18 02:39:49.095848 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-18 02:39:49.095857 | orchestrator | ++ PS1='(venv) '
2026-02-18 02:39:49.095867 | orchestrator | ++ export PS1
2026-02-18 02:39:49.095877 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-18 02:39:49.095886 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-18 02:39:49.095897 | orchestrator | ++ hash -r
2026-02-18 02:39:49.095914 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-18 02:39:50.471387 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-18 02:39:50.471857 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-18 02:39:50.473589 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-18 02:39:50.474670 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-18 02:39:50.475799 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-18 02:39:50.486112 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-18 02:39:50.487783 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-18 02:39:50.488805 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-18 02:39:50.490470 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-18 02:39:50.525085 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-18 02:39:50.526404 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-18 02:39:50.528294 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-18 02:39:50.529434 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-18 02:39:50.533605 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-18 02:39:50.787494 | orchestrator | ++ which gilt
2026-02-18 02:39:50.790832 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-18 02:39:50.790925 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-18 02:39:51.080955 | orchestrator | osism.cfg-generics:
2026-02-18 02:39:51.244882 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-18 02:39:51.245018 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-18 02:39:51.245055 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-18 02:39:51.245294 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-18 02:39:51.935586 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-18 02:39:51.946479 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-18 02:39:52.310665 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-18 02:39:52.373373 | orchestrator | ~
2026-02-18 02:39:52.373481 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-18 02:39:52.373498 | orchestrator | + deactivate
2026-02-18 02:39:52.373511 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-18 02:39:52.373525 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-18 02:39:52.373536 | orchestrator | + export PATH
2026-02-18 02:39:52.373547 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-18 02:39:52.373559 | orchestrator | + '[' -n '' ']'
2026-02-18 02:39:52.373598 | orchestrator | + hash -r
2026-02-18 02:39:52.373610 | orchestrator | + '[' -n '' ']'
2026-02-18 02:39:52.373621 | orchestrator | + unset VIRTUAL_ENV
2026-02-18 02:39:52.373632 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-18 02:39:52.373643 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-18 02:39:52.373654 | orchestrator | + unset -f deactivate
2026-02-18 02:39:52.373665 | orchestrator | + popd
2026-02-18 02:39:52.375623 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-18 02:39:52.375678 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-02-18 02:39:52.375985 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-18 02:39:52.439012 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-18 02:39:52.439123 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-02-18 02:39:52.439903 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-18 02:39:52.505981 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-18 02:39:52.506806 | orchestrator | ++ semver 2024.2 2025.1
2026-02-18 02:39:52.564351 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-18 02:39:52.564446 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-02-18 02:39:52.666830 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-18 02:39:52.666986 | orchestrator | + source /opt/venv/bin/activate
2026-02-18 02:39:52.667013 | orchestrator | ++ deactivate nondestructive
2026-02-18 02:39:52.667033 | orchestrator | ++ '[' -n '' ']'
2026-02-18 02:39:52.667052 | orchestrator | ++ '[' -n '' ']'
2026-02-18 02:39:52.667104 | orchestrator | ++ hash -r
2026-02-18 02:39:52.667132 | orchestrator | ++ '[' -n '' ']'
2026-02-18 02:39:52.667152 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-18 02:39:52.667179 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-18 02:39:52.667216 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-18 02:39:52.667234 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-18 02:39:52.667246 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-18 02:39:52.667258 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-18 02:39:52.667269 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-18 02:39:52.667281 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-18 02:39:52.667321 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-18 02:39:52.667333 | orchestrator | ++ export PATH
2026-02-18 02:39:52.667345 | orchestrator | ++ '[' -n '' ']'
2026-02-18 02:39:52.667360 | orchestrator | ++ '[' -z '' ']'
2026-02-18 02:39:52.667379 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-18 02:39:52.667390 | orchestrator | ++ PS1='(venv) '
2026-02-18 02:39:52.667401 | orchestrator | ++ export PS1
2026-02-18 02:39:52.667412 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-18 02:39:52.667423 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-18 02:39:52.667439 | orchestrator | ++ hash -r
2026-02-18 02:39:52.667455 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-02-18 02:39:54.122495 | orchestrator |
2026-02-18 02:39:54.122627 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-02-18 02:39:54.123562 | orchestrator |
2026-02-18 02:39:54.123613 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-18 02:39:54.759417 | orchestrator | ok: [testbed-manager]
2026-02-18 02:39:54.759522 | orchestrator |
2026-02-18 02:39:54.759537 | orchestrator | TASK [Copy fact files] *********************************************************
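The trace above gates features on version comparisons: `semver 9.5.0 7.0.0` prints `1`, so `[[ 1 -ge 0 ]]` holds and `enable_osism_kubernetes: true` is emitted; the later `semver 9.5.0 10.0.0-0` and `semver 2024.2 2025.1` checks print `-1` and those branches are skipped. The `semver` helper itself is not shown in this log, so the following is only a rough stand-in using `sort -V` to illustrate the same -1/0/1 contract:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the `semver` helper seen in the trace:
# prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
# Assumption: `sort -V` ordering is close enough for release tags like
# 9.5.0; full semver pre-release rules (e.g. 10.0.0-0) need more care.
semver_cmp() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1
  else
    echo 1
  fi
}

# Gate a feature flag on the result, as the trace does:
if [ "$(semver_cmp 9.5.0 7.0.0)" -ge 0 ]; then
  echo 'enable_osism_kubernetes: true'
fi
```

With this sketch, `semver_cmp 2024.2 2025.1` prints `-1`, matching the skipped branch in the trace.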
2026-02-18 02:39:55.809530 | orchestrator | changed: [testbed-manager]
2026-02-18 02:39:55.809649 | orchestrator |
2026-02-18 02:39:55.809673 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-02-18 02:39:55.809797 | orchestrator |
2026-02-18 02:39:55.809813 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-18 02:39:58.338283 | orchestrator | ok: [testbed-manager]
2026-02-18 02:39:58.338492 | orchestrator |
2026-02-18 02:39:58.338523 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-02-18 02:39:58.388047 | orchestrator | ok: [testbed-manager]
2026-02-18 02:39:58.388150 | orchestrator |
2026-02-18 02:39:58.388166 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-02-18 02:39:58.885847 | orchestrator | changed: [testbed-manager]
2026-02-18 02:39:58.885928 | orchestrator |
2026-02-18 02:39:58.885940 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-02-18 02:39:58.927662 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:39:58.927788 | orchestrator |
2026-02-18 02:39:58.927806 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-18 02:39:59.287653 | orchestrator | changed: [testbed-manager]
2026-02-18 02:39:59.287786 | orchestrator |
2026-02-18 02:39:59.287801 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-02-18 02:39:59.637963 | orchestrator | ok: [testbed-manager]
2026-02-18 02:39:59.638204 | orchestrator |
2026-02-18 02:39:59.638233 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-02-18 02:39:59.758770 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:39:59.758873 | orchestrator |
2026-02-18 02:39:59.758890 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-02-18 02:39:59.758908 | orchestrator |
2026-02-18 02:39:59.758928 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-18 02:40:02.573657 | orchestrator | ok: [testbed-manager]
2026-02-18 02:40:02.573808 | orchestrator |
2026-02-18 02:40:02.573823 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-02-18 02:40:02.694807 | orchestrator | included: osism.services.traefik for testbed-manager
2026-02-18 02:40:02.694916 | orchestrator |
2026-02-18 02:40:02.694955 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-02-18 02:40:02.754737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-02-18 02:40:02.754865 | orchestrator |
2026-02-18 02:40:02.754892 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-02-18 02:40:03.914671 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-02-18 02:40:03.914810 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-02-18 02:40:03.914818 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-02-18 02:40:03.914823 | orchestrator |
2026-02-18 02:40:03.914828 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-02-18 02:40:05.824351 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-02-18 02:40:05.824450 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-02-18 02:40:05.824462 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-02-18 02:40:05.824472 | orchestrator |
2026-02-18 02:40:05.824482 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********
2026-02-18 02:40:06.502802 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-18 02:40:06.502922 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:06.502939 | orchestrator |
2026-02-18 02:40:06.502951 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-02-18 02:40:07.171303 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-18 02:40:07.171391 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:07.171404 | orchestrator |
2026-02-18 02:40:07.171413 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-02-18 02:40:07.231261 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:40:07.231346 | orchestrator |
2026-02-18 02:40:07.231360 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-02-18 02:40:07.603690 | orchestrator | ok: [testbed-manager]
2026-02-18 02:40:07.603822 | orchestrator |
2026-02-18 02:40:07.603832 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-02-18 02:40:07.682889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-02-18 02:40:07.682972 | orchestrator |
2026-02-18 02:40:07.682980 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-02-18 02:40:08.935698 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:08.935869 | orchestrator |
2026-02-18 02:40:08.935888 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-02-18 02:40:09.853345 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:09.853424 | orchestrator |
2026-02-18 02:40:09.853433 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-02-18 02:40:19.682343 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:19.682463 | orchestrator |
2026-02-18 02:40:19.682481 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-02-18 02:40:19.752223 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:40:19.752368 | orchestrator |
2026-02-18 02:40:19.752410 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-02-18 02:40:19.752424 | orchestrator |
2026-02-18 02:40:19.752435 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-18 02:40:21.717481 | orchestrator | ok: [testbed-manager]
2026-02-18 02:40:21.717580 | orchestrator |
2026-02-18 02:40:21.717593 | orchestrator | TASK [Apply manager role] ******************************************************
2026-02-18 02:40:21.859235 | orchestrator | included: osism.services.manager for testbed-manager
2026-02-18 02:40:21.859309 | orchestrator |
2026-02-18 02:40:21.859317 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-02-18 02:40:21.925843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-02-18 02:40:21.925950 | orchestrator |
2026-02-18 02:40:21.925967 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-02-18 02:40:24.691960 | orchestrator | ok: [testbed-manager]
2026-02-18 02:40:24.692058 | orchestrator |
2026-02-18 02:40:24.692072 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-02-18 02:40:24.753870 | orchestrator | ok: [testbed-manager]
2026-02-18 02:40:24.753960 | orchestrator |
2026-02-18 02:40:24.753979 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-02-18 02:40:24.903301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-02-18 02:40:24.903393 | orchestrator |
2026-02-18 02:40:24.903403 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-02-18 02:40:27.987625 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-02-18 02:40:27.987713 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-02-18 02:40:27.987764 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-02-18 02:40:27.987773 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-02-18 02:40:27.987782 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-02-18 02:40:27.987791 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-02-18 02:40:27.987799 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-02-18 02:40:27.987807 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-02-18 02:40:27.987816 | orchestrator |
2026-02-18 02:40:27.987825 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-02-18 02:40:28.649400 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:28.649474 | orchestrator |
2026-02-18 02:40:28.649482 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-02-18 02:40:29.327346 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:29.327440 | orchestrator |
2026-02-18 02:40:29.327455 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-02-18 02:40:29.415320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-02-18 02:40:29.415423 | orchestrator |
2026-02-18 02:40:29.415439 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-02-18 02:40:30.716378 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-02-18 02:40:30.716477 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-02-18 02:40:30.716490 | orchestrator |
2026-02-18 02:40:30.716500 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-02-18 02:40:31.393124 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:31.393255 | orchestrator |
2026-02-18 02:40:31.393277 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-02-18 02:40:31.442285 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:40:31.442365 | orchestrator |
2026-02-18 02:40:31.442374 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-02-18 02:40:31.525007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-02-18 02:40:31.525113 | orchestrator |
2026-02-18 02:40:31.525130 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-02-18 02:40:32.206822 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:32.206924 | orchestrator |
2026-02-18 02:40:32.206940 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-02-18 02:40:32.271034 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-02-18 02:40:32.271134 | orchestrator |
2026-02-18 02:40:32.271150 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-02-18 02:40:33.696171 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-18 02:40:33.696282 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-18 02:40:33.696303 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:33.696325 | orchestrator |
2026-02-18 02:40:33.696345 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-02-18 02:40:34.361547 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:34.361624 | orchestrator |
2026-02-18 02:40:34.361633 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-02-18 02:40:34.432421 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:40:34.432499 | orchestrator |
2026-02-18 02:40:34.432509 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-02-18 02:40:34.552185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-02-18 02:40:34.552280 | orchestrator |
2026-02-18 02:40:34.552296 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-02-18 02:40:35.138561 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:35.139536 | orchestrator |
2026-02-18 02:40:35.139575 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-02-18 02:40:35.574096 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:35.574174 | orchestrator |
2026-02-18 02:40:35.574184 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-02-18 02:40:36.896276 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-02-18 02:40:36.896390 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-02-18 02:40:36.896407 | orchestrator |
2026-02-18 02:40:36.896420 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-02-18 02:40:37.578295 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:37.578398 | orchestrator |
2026-02-18 02:40:37.578415 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-02-18 02:40:37.986065 | orchestrator | ok: [testbed-manager]
2026-02-18 02:40:37.986164 | orchestrator |
2026-02-18 02:40:37.986179 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-02-18 02:40:38.373026 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:38.373120 | orchestrator |
2026-02-18 02:40:38.373134 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-02-18 02:40:38.426262 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:40:38.426365 | orchestrator |
2026-02-18 02:40:38.426381 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-02-18 02:40:38.497421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-02-18 02:40:38.497554 | orchestrator |
2026-02-18 02:40:38.497570 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-02-18 02:40:38.543796 | orchestrator | ok: [testbed-manager]
2026-02-18 02:40:38.543895 | orchestrator |
2026-02-18 02:40:38.543926 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-02-18 02:40:40.679209 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-02-18 02:40:40.680132 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-02-18 02:40:40.680174 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-02-18 02:40:40.680187 | orchestrator |
2026-02-18 02:40:40.680199 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-18 02:40:41.440017 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:41.440090 | orchestrator |
2026-02-18 02:40:41.440097 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-18 02:40:42.160634 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:42.160786 | orchestrator |
2026-02-18 02:40:42.160801 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-18 02:40:42.883406 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:42.883509 | orchestrator |
2026-02-18 02:40:42.883555 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-18 02:40:42.975028 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-18 02:40:42.975121 | orchestrator |
2026-02-18 02:40:42.975136 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-18 02:40:43.014372 | orchestrator | ok: [testbed-manager]
2026-02-18 02:40:43.014454 | orchestrator |
2026-02-18 02:40:43.014465 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-18 02:40:43.755514 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-02-18 02:40:43.755637 | orchestrator |
2026-02-18 02:40:43.755665 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-18 02:40:43.842345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-18 02:40:43.842449 | orchestrator |
2026-02-18 02:40:43.842466 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-18 02:40:44.640619 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:44.640705 | orchestrator |
2026-02-18 02:40:44.640717 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-18 02:40:45.280325 | orchestrator | ok: [testbed-manager]
2026-02-18 02:40:45.280418 | orchestrator |
2026-02-18 02:40:45.280433 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-18 02:40:45.343213 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:40:45.343294 | orchestrator |
2026-02-18 02:40:45.343303 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-18 02:40:45.403266 | orchestrator | ok: [testbed-manager]
2026-02-18 02:40:45.403339 | orchestrator |
2026-02-18 02:40:45.403348 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-18 02:40:46.236203 | orchestrator | changed: [testbed-manager]
2026-02-18 02:40:46.236311 | orchestrator |
2026-02-18 02:40:46.236328 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-18 02:41:58.965047 | orchestrator | changed: [testbed-manager]
2026-02-18 02:41:58.965157 | orchestrator |
2026-02-18 02:41:58.965173 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-18 02:42:00.052591 | orchestrator | ok: [testbed-manager]
2026-02-18 02:42:00.052671 | orchestrator |
2026-02-18 02:42:00.052681 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-18 02:42:00.100152 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:42:00.100276 | orchestrator |
2026-02-18 02:42:00.100303 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-18 02:42:02.918760 | orchestrator | changed: [testbed-manager]
2026-02-18 02:42:02.918970 | orchestrator |
2026-02-18 02:42:02.918992 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
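The paired "Set mariadb healthcheck" tasks above are mutually exclusive: one runs for MariaDB below 11.0.0, the other (which took effect here) for 11.0.0 and later, where the `mysql*` client names were renamed to `mariadb*`. A rough shell equivalent of that branch; the variable names and the exact healthcheck commands are assumptions, only the 11.0.0 threshold is taken from the task names:

```shell
#!/usr/bin/env bash
# Illustrative version branch mirroring the two mutually exclusive
# "Set mariadb healthcheck" tasks in the log (threshold from the task
# names; the concrete ping commands are assumptions).
mariadb_version="11.4.2"
major="${mariadb_version%%.*}"   # take the leading major component

if [ "$major" -ge 11 ]; then
  healthcheck="mariadb-admin ping"   # MariaDB >= 11.0.0
else
  healthcheck="mysqladmin ping"      # MariaDB < 11.0.0
fi

echo "$healthcheck"
```

Only one of the two assignments ever applies, which is why the log shows one task `skipping` and the other `ok`.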
2026-02-18 02:42:02.973651 | orchestrator | ok: [testbed-manager]
2026-02-18 02:42:02.973742 | orchestrator |
2026-02-18 02:42:02.973754 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-18 02:42:02.973764 | orchestrator |
2026-02-18 02:42:02.973773 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-18 02:42:03.159619 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:42:03.159699 | orchestrator |
2026-02-18 02:42:03.159708 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-18 02:43:03.212288 | orchestrator | Pausing for 60 seconds
2026-02-18 02:43:03.212423 | orchestrator | changed: [testbed-manager]
2026-02-18 02:43:03.212454 | orchestrator |
2026-02-18 02:43:03.212503 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-18 02:43:06.401309 | orchestrator | changed: [testbed-manager]
2026-02-18 02:43:06.401411 | orchestrator |
2026-02-18 02:43:06.401423 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-18 02:44:08.558580 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-18 02:44:08.558678 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-18 02:44:08.558708 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
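The "Wait for an healthy manager service" handler above is an Ansible `until`/`retries` loop: it polls the service, burns retries (50, 49, 48 ...) while the healthcheck warms up, and succeeds as soon as the service reports healthy. The same pattern can be sketched in plain shell with an injectable probe, so it can be exercised without Docker; against a real container the probe might be something like `docker inspect --format '{{.State.Health.Status}}' <name>`, which is an assumption here, not taken from the role:

```shell
#!/usr/bin/env bash
# Generic retry-until-healthy loop in the spirit of the handler above.
# $1 is a command that prints the current health state.
wait_for_healthy() {
  local probe_cmd=$1 retries=${2:-50} delay=${3:-1}
  local attempt=0
  while [ "$attempt" -lt "$retries" ]; do
    if [ "$($probe_cmd)" = "healthy" ]; then
      return 0
    fi
    attempt=$((attempt + 1))
    echo "FAILED - RETRYING: ($((retries - attempt)) retries left)." >&2
    sleep "$delay"
  done
  return 1
}

# Simulated probe: unhealthy for the first three calls, then healthy,
# roughly matching the three retries seen in the log.
state_file=$(mktemp)
echo 0 > "$state_file"
fake_probe() {
  local n
  n=$(cat "$state_file")
  echo $((n + 1)) > "$state_file"
  if [ "$n" -lt 3 ]; then echo starting; else echo healthy; fi
}

wait_for_healthy fake_probe 50 0 && echo "manager is healthy"
```

The handler's visible behavior in the log (three FAILED - RETRYING lines, then `changed`) corresponds to the probe flipping to healthy on the fourth poll.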
2026-02-18 02:44:08.558718 | orchestrator | changed: [testbed-manager]
2026-02-18 02:44:08.558727 | orchestrator |
2026-02-18 02:44:08.558736 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-18 02:44:20.665670 | orchestrator | changed: [testbed-manager]
2026-02-18 02:44:20.665786 | orchestrator |
2026-02-18 02:44:20.665802 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-18 02:44:20.750271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-18 02:44:20.750365 | orchestrator |
2026-02-18 02:44:20.750380 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-18 02:44:20.750393 | orchestrator |
2026-02-18 02:44:20.750404 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-18 02:44:20.798808 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:44:20.798905 | orchestrator |
2026-02-18 02:44:20.798923 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-18 02:44:20.872037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-18 02:44:20.872112 | orchestrator |
2026-02-18 02:44:20.872121 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-18 02:44:21.730618 | orchestrator | changed: [testbed-manager]
2026-02-18 02:44:21.730795 | orchestrator |
2026-02-18 02:44:21.730813 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-18 02:44:25.037486 | orchestrator | ok: [testbed-manager]
2026-02-18 02:44:25.037557 | orchestrator |
2026-02-18 02:44:25.037564 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-18 02:44:25.120447 | orchestrator | ok: [testbed-manager] => {
2026-02-18 02:44:25.120552 | orchestrator | "version_check_result.stdout_lines": [
2026-02-18 02:44:25.120575 | orchestrator | "=== OSISM Container Version Check ===",
2026-02-18 02:44:25.120592 | orchestrator | "Checking running containers against expected versions...",
2026-02-18 02:44:25.120610 | orchestrator | "",
2026-02-18 02:44:25.120621 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-18 02:44:25.120630 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-18 02:44:25.120640 | orchestrator | " Enabled: true",
2026-02-18 02:44:25.120649 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-18 02:44:25.120658 | orchestrator | " Status: ✅ MATCH",
2026-02-18 02:44:25.120668 | orchestrator | "",
2026-02-18 02:44:25.120683 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-18 02:44:25.120737 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-18 02:44:25.120757 | orchestrator | " Enabled: true",
2026-02-18 02:44:25.120772 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-18 02:44:25.120787 | orchestrator | " Status: ✅ MATCH",
2026-02-18 02:44:25.120797 | orchestrator | "",
2026-02-18 02:44:25.120806 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-18 02:44:25.120815 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-18 02:44:25.120824 | orchestrator | " Enabled: true",
2026-02-18 02:44:25.120832 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-18 02:44:25.120841 | orchestrator | " Status: ✅ MATCH",
2026-02-18 02:44:25.120850 | orchestrator | "",
2026-02-18 02:44:25.120858 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-18 02:44:25.120895 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-18 02:44:25.120904 | orchestrator | " Enabled: true",
2026-02-18 02:44:25.120913 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-18 02:44:25.120922 | orchestrator | " Status: ✅ MATCH",
2026-02-18 02:44:25.120930 | orchestrator | "",
2026-02-18 02:44:25.120941 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-18 02:44:25.120950 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-18 02:44:25.120958 | orchestrator | " Enabled: true",
2026-02-18 02:44:25.120967 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-18 02:44:25.120975 | orchestrator | " Status: ✅ MATCH",
2026-02-18 02:44:25.120984 | orchestrator | "",
2026-02-18 02:44:25.121016 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-02-18 02:44:25.121026 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-18 02:44:25.121035 | orchestrator | " Enabled: true",
2026-02-18 02:44:25.121044 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-18 02:44:25.121053 | orchestrator | " Status: ✅ MATCH",
2026-02-18 02:44:25.121061 | orchestrator | "",
2026-02-18 02:44:25.121070 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-02-18 02:44:25.121079 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-18 02:44:25.121087 | orchestrator | " Enabled: true",
2026-02-18 02:44:25.121097 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-18 02:44:25.121106 | orchestrator | " Status: ✅ MATCH",
2026-02-18 02:44:25.121115 | orchestrator | "",
2026-02-18 02:44:25.121123 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-18 02:44:25.121132 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-18 02:44:25.121141 | orchestrator | " Enabled: true", 2026-02-18 02:44:25.121150 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-18 02:44:25.121158 | orchestrator | " Status: ✅ MATCH", 2026-02-18 02:44:25.121167 | orchestrator | "", 2026-02-18 02:44:25.121175 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-18 02:44:25.121184 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-18 02:44:25.121193 | orchestrator | " Enabled: true", 2026-02-18 02:44:25.121201 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-18 02:44:25.121210 | orchestrator | " Status: ✅ MATCH", 2026-02-18 02:44:25.121219 | orchestrator | "", 2026-02-18 02:44:25.121228 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-18 02:44:25.121236 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-18 02:44:25.121245 | orchestrator | " Enabled: true", 2026-02-18 02:44:25.121253 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-18 02:44:25.121262 | orchestrator | " Status: ✅ MATCH", 2026-02-18 02:44:25.121271 | orchestrator | "", 2026-02-18 02:44:25.121280 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-18 02:44:25.121296 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-18 02:44:25.121305 | orchestrator | " Enabled: true", 2026-02-18 02:44:25.121313 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-18 02:44:25.121322 | orchestrator | " Status: ✅ MATCH", 2026-02-18 02:44:25.121331 | orchestrator | "", 2026-02-18 02:44:25.121339 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-18 02:44:25.121348 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-18 02:44:25.121357 | orchestrator | " Enabled: true", 2026-02-18 02:44:25.121365 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-18 02:44:25.121378 | orchestrator | " Status: ✅ MATCH", 2026-02-18 02:44:25.121393 | orchestrator | "", 2026-02-18 02:44:25.121406 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-18 02:44:25.121420 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-18 02:44:25.121436 | orchestrator | " Enabled: true", 2026-02-18 02:44:25.121449 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-18 02:44:25.121464 | orchestrator | " Status: ✅ MATCH", 2026-02-18 02:44:25.121474 | orchestrator | "", 2026-02-18 02:44:25.121482 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-18 02:44:25.121491 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-18 02:44:25.121500 | orchestrator | " Enabled: true", 2026-02-18 02:44:25.121508 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-18 02:44:25.121533 | orchestrator | " Status: ✅ MATCH", 2026-02-18 02:44:25.121543 | orchestrator | "", 2026-02-18 02:44:25.121551 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-18 02:44:25.121560 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-18 02:44:25.121577 | orchestrator | " Enabled: true", 2026-02-18 02:44:25.121586 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-18 02:44:25.121595 | orchestrator | " Status: ✅ MATCH", 2026-02-18 02:44:25.121603 | orchestrator | "", 2026-02-18 02:44:25.121612 | orchestrator | "=== Summary ===", 2026-02-18 02:44:25.121620 | orchestrator | "Errors (version mismatches): 0", 2026-02-18 02:44:25.121629 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-18 02:44:25.121638 | orchestrator | "", 2026-02-18 02:44:25.121647 | orchestrator | "✅ All running containers match expected versions!" 2026-02-18 02:44:25.121656 | orchestrator | ] 2026-02-18 02:44:25.121664 | orchestrator | } 2026-02-18 02:44:25.121679 | orchestrator | 2026-02-18 02:44:25.121694 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-18 02:44:25.182357 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:44:25.182478 | orchestrator | 2026-02-18 02:44:25.182501 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 02:44:25.182523 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-18 02:44:25.182541 | orchestrator | 2026-02-18 02:44:25.323499 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-18 02:44:25.323650 | orchestrator | + deactivate 2026-02-18 02:44:25.323679 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-18 02:44:25.323798 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-18 02:44:25.323822 | orchestrator | + export PATH 2026-02-18 02:44:25.323834 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-18 02:44:25.323845 | orchestrator | + '[' -n '' ']' 2026-02-18 02:44:25.323856 | orchestrator | + hash -r 2026-02-18 02:44:25.323867 | orchestrator | + '[' -n '' ']' 2026-02-18 02:44:25.323879 | orchestrator | + unset VIRTUAL_ENV 2026-02-18 02:44:25.323890 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-18 02:44:25.323901 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-18 02:44:25.323912 | orchestrator | + unset -f deactivate 2026-02-18 02:44:25.323924 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-18 02:44:25.330334 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-18 02:44:25.330415 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-18 02:44:25.330456 | orchestrator | + local max_attempts=60 2026-02-18 02:44:25.330469 | orchestrator | + local name=ceph-ansible 2026-02-18 02:44:25.330480 | orchestrator | + local attempt_num=1 2026-02-18 02:44:25.331289 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 02:44:25.367758 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-18 02:44:25.367838 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-18 02:44:25.367848 | orchestrator | + local max_attempts=60 2026-02-18 02:44:25.367855 | orchestrator | + local name=kolla-ansible 2026-02-18 02:44:25.367862 | orchestrator | + local attempt_num=1 2026-02-18 02:44:25.368718 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-18 02:44:25.405147 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-18 02:44:25.405221 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-18 02:44:25.405231 | orchestrator | + local max_attempts=60 2026-02-18 02:44:25.405239 | orchestrator | + local name=osism-ansible 2026-02-18 02:44:25.405246 | orchestrator | + local attempt_num=1 2026-02-18 02:44:25.406331 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-18 02:44:25.444833 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-18 02:44:25.445151 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-18 02:44:25.445177 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-18 02:44:26.204593 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-18 02:44:26.413417 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-18 02:44:26.413514 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-18 02:44:26.413529 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-18 02:44:26.413540 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-18 02:44:26.413552 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-18 02:44:26.413582 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-18 02:44:26.413593 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-18 02:44:26.413603 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-18 02:44:26.413612 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-18 02:44:26.413622 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-18 02:44:26.413632 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-02-18 02:44:26.413649 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-18 02:44:26.413667 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-18 02:44:26.413710 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-18 02:44:26.413728 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-18 02:44:26.413746 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-18 02:44:26.423901 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-18 02:44:26.493585 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-18 02:44:26.493675 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-18 02:44:26.499281 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-18 02:44:38.937647 | orchestrator | 2026-02-18 02:44:38 | INFO  | Task 6c2b12d0-1537-4a36-b1f9-7ec25c2c1291 (resolvconf) was prepared for execution. 2026-02-18 02:44:38.937760 | orchestrator | 2026-02-18 02:44:38 | INFO  | It takes a moment until task 6c2b12d0-1537-4a36-b1f9-7ec25c2c1291 (resolvconf) has been started and output is visible here. 
2026-02-18 02:44:54.003608 | orchestrator | 2026-02-18 02:44:54.003723 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-18 02:44:54.003740 | orchestrator | 2026-02-18 02:44:54.003753 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-18 02:44:54.003765 | orchestrator | Wednesday 18 February 2026 02:44:43 +0000 (0:00:00.154) 0:00:00.154 **** 2026-02-18 02:44:54.003776 | orchestrator | ok: [testbed-manager] 2026-02-18 02:44:54.003788 | orchestrator | 2026-02-18 02:44:54.003800 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-18 02:44:54.003812 | orchestrator | Wednesday 18 February 2026 02:44:47 +0000 (0:00:03.964) 0:00:04.118 **** 2026-02-18 02:44:54.003823 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:44:54.003835 | orchestrator | 2026-02-18 02:44:54.003846 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-18 02:44:54.003857 | orchestrator | Wednesday 18 February 2026 02:44:47 +0000 (0:00:00.074) 0:00:04.193 **** 2026-02-18 02:44:54.003869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-18 02:44:54.003881 | orchestrator | 2026-02-18 02:44:54.003892 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-18 02:44:54.003903 | orchestrator | Wednesday 18 February 2026 02:44:47 +0000 (0:00:00.088) 0:00:04.282 **** 2026-02-18 02:44:54.003934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-18 02:44:54.003946 | orchestrator | 2026-02-18 02:44:54.003957 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-18 02:44:54.003968 | orchestrator | Wednesday 18 February 2026 02:44:47 +0000 (0:00:00.088) 0:00:04.370 **** 2026-02-18 02:44:54.003979 | orchestrator | ok: [testbed-manager] 2026-02-18 02:44:54.003990 | orchestrator | 2026-02-18 02:44:54.004001 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-18 02:44:54.004012 | orchestrator | Wednesday 18 February 2026 02:44:48 +0000 (0:00:01.223) 0:00:05.594 **** 2026-02-18 02:44:54.004023 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:44:54.004065 | orchestrator | 2026-02-18 02:44:54.004080 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-18 02:44:54.004091 | orchestrator | Wednesday 18 February 2026 02:44:48 +0000 (0:00:00.056) 0:00:05.651 **** 2026-02-18 02:44:54.004129 | orchestrator | ok: [testbed-manager] 2026-02-18 02:44:54.004143 | orchestrator | 2026-02-18 02:44:54.004156 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-18 02:44:54.004168 | orchestrator | Wednesday 18 February 2026 02:44:49 +0000 (0:00:00.550) 0:00:06.201 **** 2026-02-18 02:44:54.004180 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:44:54.004193 | orchestrator | 2026-02-18 02:44:54.004205 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-18 02:44:54.004219 | orchestrator | Wednesday 18 February 2026 02:44:49 +0000 (0:00:00.090) 0:00:06.291 **** 2026-02-18 02:44:54.004231 | orchestrator | changed: [testbed-manager] 2026-02-18 02:44:54.004242 | orchestrator | 2026-02-18 02:44:54.004253 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-18 02:44:54.004264 | orchestrator | Wednesday 18 February 2026 02:44:50 +0000 (0:00:00.629) 0:00:06.921 **** 2026-02-18 02:44:54.004275 | orchestrator | changed: 
[testbed-manager] 2026-02-18 02:44:54.004286 | orchestrator | 2026-02-18 02:44:54.004297 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-18 02:44:54.004315 | orchestrator | Wednesday 18 February 2026 02:44:51 +0000 (0:00:01.142) 0:00:08.064 **** 2026-02-18 02:44:54.004334 | orchestrator | ok: [testbed-manager] 2026-02-18 02:44:54.004353 | orchestrator | 2026-02-18 02:44:54.004371 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-18 02:44:54.004390 | orchestrator | Wednesday 18 February 2026 02:44:52 +0000 (0:00:01.028) 0:00:09.092 **** 2026-02-18 02:44:54.004408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-18 02:44:54.004428 | orchestrator | 2026-02-18 02:44:54.004446 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-18 02:44:54.004465 | orchestrator | Wednesday 18 February 2026 02:44:52 +0000 (0:00:00.110) 0:00:09.202 **** 2026-02-18 02:44:54.004483 | orchestrator | changed: [testbed-manager] 2026-02-18 02:44:54.004502 | orchestrator | 2026-02-18 02:44:54.004521 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 02:44:54.004541 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-18 02:44:54.004560 | orchestrator | 2026-02-18 02:44:54.004576 | orchestrator | 2026-02-18 02:44:54.004587 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 02:44:54.004598 | orchestrator | Wednesday 18 February 2026 02:44:53 +0000 (0:00:01.234) 0:00:10.436 **** 2026-02-18 02:44:54.004608 | orchestrator | =============================================================================== 2026-02-18 02:44:54.004619 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.96s 2026-02-18 02:44:54.004630 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.23s 2026-02-18 02:44:54.004641 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.22s 2026-02-18 02:44:54.004651 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.14s 2026-02-18 02:44:54.004662 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.03s 2026-02-18 02:44:54.004673 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.63s 2026-02-18 02:44:54.004704 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s 2026-02-18 02:44:54.004715 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.11s 2026-02-18 02:44:54.004732 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-02-18 02:44:54.004752 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-02-18 02:44:54.004780 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-02-18 02:44:54.004798 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-18 02:44:54.004829 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-02-18 02:44:54.375595 | orchestrator | + osism apply sshconfig 2026-02-18 02:45:06.502794 | orchestrator | 2026-02-18 02:45:06 | INFO  | Task df7a0c72-13fe-450f-9c58-c4e9c5e5037a (sshconfig) was prepared for execution. 
2026-02-18 02:45:06.502880 | orchestrator | 2026-02-18 02:45:06 | INFO  | It takes a moment until task df7a0c72-13fe-450f-9c58-c4e9c5e5037a (sshconfig) has been started and output is visible here. 2026-02-18 02:45:19.024644 | orchestrator | 2026-02-18 02:45:19.024735 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-18 02:45:19.024745 | orchestrator | 2026-02-18 02:45:19.024768 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-18 02:45:19.024776 | orchestrator | Wednesday 18 February 2026 02:45:10 +0000 (0:00:00.208) 0:00:00.208 **** 2026-02-18 02:45:19.024782 | orchestrator | ok: [testbed-manager] 2026-02-18 02:45:19.024790 | orchestrator | 2026-02-18 02:45:19.024797 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-18 02:45:19.024803 | orchestrator | Wednesday 18 February 2026 02:45:11 +0000 (0:00:00.560) 0:00:00.768 **** 2026-02-18 02:45:19.024810 | orchestrator | changed: [testbed-manager] 2026-02-18 02:45:19.024818 | orchestrator | 2026-02-18 02:45:19.024824 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-18 02:45:19.024831 | orchestrator | Wednesday 18 February 2026 02:45:12 +0000 (0:00:00.559) 0:00:01.328 **** 2026-02-18 02:45:19.024837 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-18 02:45:19.024844 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-18 02:45:19.024851 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-18 02:45:19.024857 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-18 02:45:19.024864 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-18 02:45:19.024870 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-18 02:45:19.024876 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-18 02:45:19.024882 | orchestrator | 2026-02-18 02:45:19.024889 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-18 02:45:19.024895 | orchestrator | Wednesday 18 February 2026 02:45:18 +0000 (0:00:05.963) 0:00:07.292 **** 2026-02-18 02:45:19.024902 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:45:19.024908 | orchestrator | 2026-02-18 02:45:19.024914 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-18 02:45:19.024921 | orchestrator | Wednesday 18 February 2026 02:45:18 +0000 (0:00:00.079) 0:00:07.371 **** 2026-02-18 02:45:19.024927 | orchestrator | changed: [testbed-manager] 2026-02-18 02:45:19.024933 | orchestrator | 2026-02-18 02:45:19.024940 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 02:45:19.024947 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 02:45:19.024954 | orchestrator | 2026-02-18 02:45:19.024961 | orchestrator | 2026-02-18 02:45:19.024967 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 02:45:19.024973 | orchestrator | Wednesday 18 February 2026 02:45:18 +0000 (0:00:00.604) 0:00:07.976 **** 2026-02-18 02:45:19.024980 | orchestrator | =============================================================================== 2026-02-18 02:45:19.024986 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.96s 2026-02-18 02:45:19.024993 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s 2026-02-18 02:45:19.024999 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2026-02-18 02:45:19.025005 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.56s 2026-02-18 02:45:19.025031 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-02-18 02:45:19.399630 | orchestrator | + osism apply known-hosts 2026-02-18 02:45:31.518812 | orchestrator | 2026-02-18 02:45:31 | INFO  | Task 7836d0f2-a431-4ac8-80b8-8f620c4885ed (known-hosts) was prepared for execution. 2026-02-18 02:45:31.518942 | orchestrator | 2026-02-18 02:45:31 | INFO  | It takes a moment until task 7836d0f2-a431-4ac8-80b8-8f620c4885ed (known-hosts) has been started and output is visible here. 2026-02-18 02:45:49.134682 | orchestrator | 2026-02-18 02:45:49.134827 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-18 02:45:49.134853 | orchestrator | 2026-02-18 02:45:49.134872 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-18 02:45:49.134893 | orchestrator | Wednesday 18 February 2026 02:45:35 +0000 (0:00:00.189) 0:00:00.189 **** 2026-02-18 02:45:49.134913 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-18 02:45:49.134934 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-18 02:45:49.134952 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-18 02:45:49.134970 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-18 02:45:49.134987 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-18 02:45:49.135006 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-18 02:45:49.135027 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-18 02:45:49.135047 | orchestrator | 2026-02-18 02:45:49.135067 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-18 02:45:49.135090 | orchestrator | Wednesday 18 February 2026 02:45:42 +0000 (0:00:06.062) 0:00:06.252 **** 2026-02-18 
02:45:49.135111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-02-18 02:45:49.135189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-02-18 02:45:49.135212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-02-18 02:45:49.135234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-02-18 02:45:49.135255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-02-18 02:45:49.135295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-02-18 02:45:49.135317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-02-18 02:45:49.135337 | orchestrator |
2026-02-18 02:45:49.135357 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:45:49.135377 | orchestrator | Wednesday 18 February 2026 02:45:42 +0000 (0:00:00.186) 0:00:06.438 ****
2026-02-18 02:45:49.135396 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL3yt453x3SRn93D6uYZjjuheMttoR/CUbCRkpQ37ZNW)
2026-02-18 02:45:49.135431 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzaotjQoYZFeP6Hnie4oDuUhqhjUhtTUjLUwcoNnXrCo3xkxBpujQs5v5g4IScmKuDSAi6XN+SkRJk8OJx+rGYLt1VegtQbP2GNG1CjGyCaaHNX9piII73GOz0d8fCUCNVDQ17LRTqzNwVnEf7FIbu3Z79jqdBQAGPEh2w7daAhY0z4+kHFyCIGA/cNp88XILdL4XvJqa1loARbaUzk5FO1+0GV4niDKsbkeFoN/YxcTnlM87AE9RQhCLvDCcz/1kvBYmwMyoWuFkGEX8fxljnH8luA9KIHuIBGFTJvP6+SC97BLNGb0Y75fjh3usCDoUWCp3fFjhveGpCMACX3hgVx/BKvHcxOG8YvV0YOISte420YqBuMUFmqHvOGpO3U2+8Ol/GB23yeSDI5mVPLOzX3DjZz04FF0omzXoJWtwm9zyVez18bucDtn6Sb/ogtRfm6XO+2QE3Z6KZIBeyYCAxhRAo8qwJSX9iIPVfWAECF6q78ZuBPrhqg2xluXfkVu8=)
2026-02-18 02:45:49.135493 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDmyZvMJB5Urw9DthXsMoQX2Rxz42HOZECwWD08p5EUDlmvqZvOsqbe8/s2UcmJ64M4ah6iCdPJmagKZD9RieZE=)
2026-02-18 02:45:49.135517 | orchestrator |
2026-02-18 02:45:49.135535 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:45:49.135554 | orchestrator | Wednesday 18 February 2026 02:45:43 +0000 (0:00:01.240) 0:00:07.678 ****
2026-02-18 02:45:49.135602 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDINvgDbfTKU8g/LQIVE4mnsEGpkVArOSzenhanIpPiv7UQdb4izElRrYQW/aKUsD9G0Hl2z1DwEfSHS+QFQ0Py4bU3vS5uS2ixKMpGHm1D+DKYLA9vHKyUzmbaJdEbp7/z+I+0fOti+zCG8RX+9zYjwMri2PBZvgTUWiSkUAuzo22BNPev1KYkMkKtQptaGp9fmqjC77/fb3LKVfCTjsHiUhtyqnLVPBYM5gNGF51y2i7HUH8eNkbGRzvepyU945cYTx2oFdoyFvz897PiG9uZZSwr5i1QUxkMnmkLdicgvswFWVW6uni+6yvjV01tas4Rab69sVvTUNZ7XSU77CpVo/hm43+kNdC3Okn5QiNJEWNMnbgjOKglj24Ve+HKkpia4tVfQa6UDKpz0ANc0v/IxU9lo5C7h3uASf7a/FjM6P5kWS4H/i4IoZBogpoF94HNr86UhQUMCIm7A6VMMcvG38nwgu1FzTENneM0w4k/wBYrFfjE4UGbS8RIDhaZ+0M=)
2026-02-18 02:45:49.135626 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNRcjNp4AxetfqmtAX3Xc0M908zHZqN3RkIy8O9A3lzomtnemXoqgkMgLlTi7T77gM/ivabfcoj+QFQt914hTDo=)
2026-02-18 02:45:49.135644 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGUpOk5ujyaZTZ6aRtkXC3x6NtQCqOsaxlS6OD58bWFK)
2026-02-18 02:45:49.135661 | orchestrator |
2026-02-18 02:45:49.135680 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:45:49.135699 | orchestrator | Wednesday 18 February 2026 02:45:44 +0000 (0:00:01.115) 0:00:08.793 ****
2026-02-18 02:45:49.135717 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMelA68daUqLuoQ6JgD7a80v2aSQWCCzYYZQgxv733aR)
2026-02-18 02:45:49.135737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGVfXzAaFaIt6Zd82Zmf3B7ezegBo+DFrUtKL/oCr+TUh681X10FQuS1hEyImHR0KibV7oDCjN7E77V6nMVDm6K+B+lfUmxYkURtw5p9cLA5yPkiO1tPL76m/6/6X8BoU9HDjX4WNf6w018NNSmbyOvjTTdNG592Ge1JKW3RN/yDPx+18SGrXjihlJvJmWx2Hp2SpyA7IOgJMErWBvQ1HawQFbM67Lm6imNSXGHoSYi+aNFIW9/fr26gaiKoSDKLwBo4kzVf8guj3cnTV/Z13k87JrkwBvDSxs8vLnzfxyVBd2bDbT/RfuP0REHCLaLszairzHd+j/IPkOBwkhpzIVlWPR7Ad4j5UWbEwYYADiVJSTfPnzybAiwIhgChL9KNv2FoZvjueOvF9UpB9hSJKVLleyuwrw96T518PZHnO47cJPuTReUy3cun33+OdDa3zZmQZe/qAkD+rwbpN8iEZVLjhPAwYaOHSCF1QTWuWvDwxXy/sv/omwm5S9xhQpXw0=)
2026-02-18 02:45:49.135757 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMLOnRIZgVDyXOO8vsUb1dlVK/VaAghA+jYWnw1qIAN4fsnzgiBmdCChMFNNAxh8Ytv09BUgi1bkCHTRrfFUNpg=)
2026-02-18 02:45:49.135777 | orchestrator |
2026-02-18 02:45:49.135789 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:45:49.135799 | orchestrator | Wednesday 18 February 2026 02:45:45 +0000 (0:00:01.107) 0:00:09.901 ****
2026-02-18 02:45:49.135810 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNLACwtJcFj2k8W3HaSnHJsPOqe9AwTmvizIlolGOfRbnUAJnVOYnVLkvlH44zlHPEXimQM69Y1DaLkYWJfo31c=)
2026-02-18 02:45:49.135822 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXhEXcpXdAOwD53z28w9desRWiuC1VS0Hav4CkV7sT5J1bi7aUWa3hbLr6xm/Ovai+TIkNpwtQvV4H4Rlqj2eezJgKhi/5+fkyE0sccC91S4XW8yuvZMnDT43x95UhLPDsdXnPQbhzidxKFEMV7jLnDKHMH0btz1GBbTh5shWV8bJ8LdCkqKTkXjGESZAXO8J3EYhoG0BWvEY+74y2z3qtHjEA2yU/Z+6IKIkScpfd6CbM0I4zJPY1sTbs0N4aqfWPhR07zFGxUUSpeHxYRFDNs6ZUV3j60MHlQpMoRfIZ/Z3nEETk/DpM3TI5Q9Bc44h4hKkGscm5T+OSSYIJMZ8Qc1mCAK5dAbLvNHgCcvC6ReMmmJhhYEbocTc87wVh/WU9m8P5ESrzlPDjHpXoaeawKL2ZK7uUDcDbzgAA0RcJp862d6jhSiGps3UeKJ146+ui7+vUtXZ5EpXgFkvxeaYvbeHubQsqIf10HhhfCQ4eEPDZ11ga+1LgfZ84fVK4JHc=)
2026-02-18 02:45:49.135848 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAIMJB+rbMb4THtpN4thKALwV9+HQJqpN/7sLzMmhGT0)
2026-02-18 02:45:49.135859 | orchestrator |
2026-02-18 02:45:49.135870 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:45:49.135881 | orchestrator | Wednesday 18 February 2026 02:45:46 +0000 (0:00:01.153) 0:00:11.055 ****
2026-02-18 02:45:49.135972 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDzB1I/Uztnd51FnzIx3yctJqtHDu90sCmuwyB3tFL3scoJGf1eUD4+p3uqN08xqJXKaZSDvP3Z07fe/8XGWiQey1D35mTA4dq2iB50eTNTa+ffYf1dpTP+Pwo6hiFtCJNDNrwvNcP9yDyXI6kd7dO4ClwujqWerDKbZH2jT1xcYA3Li27nVIp1Uh1WQll8oRQHmp0V21rQgi5JQxcpKw1vEytNuPjROXzPemqoEJnX4HBR9X0DsRZvL4hU6JxBwleSgEB/j1IRDDHAdfFwT6sOYLwQUAEfgepkPVTHS6mp08Gdpd2Tp9PxvKaEgO3C/Ly2OeX+/Sv4HsDHiaPC6N6qa9igfwYI3ZrRmFMMTQF9c58q/VIgxiXjmlQqDRbwYL2R5eWZjs/TmkBjU2M1aOv8GIH44YRU6VpPGojhe2tX92GLn/9vkH6ptYMBUDMXdP5SjH5xfDJ3brD+Q13mkeZilLQfvDDhj2BiN62ZkoDj9mNWQNK1bITaGP7qORryYlU=)
2026-02-18 02:45:49.135984 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGk0RyQhDaMsoRHTxs1s3SXnmVFjavOMeaQwYTCWqxWTJCsk5uEzFqJREgYowiXRBBM2m5M6A9ZmCk25NiwU7js=)
2026-02-18 02:45:49.135995 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN9EqGPrIspffj6bXtTBFKe43pOkz4PeVkyshI1ZdfJA)
2026-02-18 02:45:49.136006 | orchestrator |
2026-02-18 02:45:49.136017 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:45:49.136027 | orchestrator | Wednesday 18 February 2026 02:45:47 +0000 (0:00:01.129) 0:00:12.185 ****
2026-02-18 02:45:49.136049 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP67ZM3Na+ix2GsBHjiN7uAc2mzQ/JDdORYcRforQEic)
2026-02-18 02:46:00.494608 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCunumMZVCrTaUIDOpyOQgodYs+1eZQccljPYhph9rMoqlrhCvR2dOETfF9y7S+VjMCcXddFalP5mG1Y6cDNcrjLV6hINjb3OPV0pKn53ufCzbmcpGQoh29e84yOy0JW9x1xODo7PQOtwXGOJNhy9dLFX7adQjZZmUvYganBxtuKmNKM7KnxQA/xmqRHcw+BhNMaUT75Kv0EpWas5MIseKmY8CkvC0JtvaCekJ5Z1tW7Jw4G86UBLXklRC4Z+xNpjh6mLtFndoJb4AHqQyV3p+c7t2d7AzyQNXSeYD9TvRcNVwrwiQwoM4ETFw84GFLkBpgANdd8FCCYhDsqzVJKcMvowPbh9PBAMOK/esI5zR89c1hbkSSuXrGLcr3oRKvR3v89jqqPFUs7bUWzQyLaGwmTq1RwcD2u2VF1hxYxdN0ruawF6bLNtEGm6KmWKIPOTSUpVSgysoVR1VtRALFrA++hva6T9zM6UXxYFtUysNNluW+OfVLg1abw4uLzsC1Ds=)
2026-02-18 02:46:00.494764 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCCvyb3PCjQimVzVQuu48bjwSKf+NOPz+haMgWjYkU5PbmHRV14PRuBZMzf6NFoA+UgvjOtWcnLGBdgwiqfMWYM=)
2026-02-18 02:46:00.494802 | orchestrator |
2026-02-18 02:46:00.494823 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:46:00.494844 | orchestrator | Wednesday 18 February 2026 02:45:49 +0000 (0:00:01.142) 0:00:13.327 ****
2026-02-18 02:46:00.494863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCq7qgMKliyW8K03qioJfNpTN+H44r+idEJeK8rsNkB+Y13nWsiVPvy0DjniHxuChRBBdN9X/LYwmyMgSqtrOyKyXQovgfBLoBqDnDx6oVbanTdnt5bG2juKfqYEHl7Slyj59NFbVh7PHeSMODEJBOf7jSTEtoLh5v8M3401i38aom5T7vw0i7Jke6GYBHM9rYKLLbc/S5BCy/1racqYHomAIyb+TUyzfwgtxa0FFOs+GfdzIIzk8oH2mfK2RC2ahrOkJpCgYkSGM08wrh9x/pSW+5TZSXcv3BAuKKybUYUjNc1+9KfFXU99Cf4GINhoUilGNBXhifIvCp91VMdnEyWgGzo5khvRhb2A4NNnzTAiV6EVEWlV+vgyvKDOv1s8Ga37el3ar3nogaTk39K8dKVtq0MlABOnuoHlTjTQHpnpiVJcZoswytGk+W0DAdDraVR5Opycic0bvHLFEmGjLixcuo1jhP8D4ntQhaaBFW3WYoAXmeAyOWzNyt0ZSkWZtk=)
2026-02-18 02:46:00.494882 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEQqeLAybcQbmpQWeaKx0JFZZYCDO9fbbUDanpWA/bc6FvNhOpexeT259K1GPAaLFstQS9CRt2WQ93Of2DDCd2E=)
2026-02-18 02:46:00.494929 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFFH+8btYIV0udfBBtV0jrnHRlw2iPK5opLBpsyenIn+)
2026-02-18 02:46:00.494950 | orchestrator |
2026-02-18 02:46:00.494968 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-02-18 02:46:00.494988 | orchestrator | Wednesday 18 February 2026 02:45:50 +0000 (0:00:01.110) 0:00:14.438 ****
2026-02-18 02:46:00.495007 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-02-18 02:46:00.495027 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-02-18 02:46:00.495044 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-02-18 02:46:00.495064 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-02-18 02:46:00.495084 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-18 02:46:00.495103 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-18 02:46:00.495117 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-18 02:46:00.495130 | orchestrator |
2026-02-18 02:46:00.495190 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-02-18 02:46:00.495205 | orchestrator | Wednesday 18 February 2026 02:45:55 +0000 (0:00:05.527) 0:00:19.965 ****
2026-02-18 02:46:00.495219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-02-18 02:46:00.495234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-02-18 02:46:00.495251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-02-18 02:46:00.495271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-02-18 02:46:00.495285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-02-18 02:46:00.495299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-02-18 02:46:00.495312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-02-18 02:46:00.495326 | orchestrator |
2026-02-18 02:46:00.495358 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:46:00.495371 | orchestrator | Wednesday 18 February 2026 02:45:55 +0000 (0:00:00.209) 0:00:20.175 ****
2026-02-18 02:46:00.495391 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDmyZvMJB5Urw9DthXsMoQX2Rxz42HOZECwWD08p5EUDlmvqZvOsqbe8/s2UcmJ64M4ah6iCdPJmagKZD9RieZE=)
2026-02-18 02:46:00.495421 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzaotjQoYZFeP6Hnie4oDuUhqhjUhtTUjLUwcoNnXrCo3xkxBpujQs5v5g4IScmKuDSAi6XN+SkRJk8OJx+rGYLt1VegtQbP2GNG1CjGyCaaHNX9piII73GOz0d8fCUCNVDQ17LRTqzNwVnEf7FIbu3Z79jqdBQAGPEh2w7daAhY0z4+kHFyCIGA/cNp88XILdL4XvJqa1loARbaUzk5FO1+0GV4niDKsbkeFoN/YxcTnlM87AE9RQhCLvDCcz/1kvBYmwMyoWuFkGEX8fxljnH8luA9KIHuIBGFTJvP6+SC97BLNGb0Y75fjh3usCDoUWCp3fFjhveGpCMACX3hgVx/BKvHcxOG8YvV0YOISte420YqBuMUFmqHvOGpO3U2+8Ol/GB23yeSDI5mVPLOzX3DjZz04FF0omzXoJWtwm9zyVez18bucDtn6Sb/ogtRfm6XO+2QE3Z6KZIBeyYCAxhRAo8qwJSX9iIPVfWAECF6q78ZuBPrhqg2xluXfkVu8=)
2026-02-18 02:46:00.495474 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL3yt453x3SRn93D6uYZjjuheMttoR/CUbCRkpQ37ZNW)
2026-02-18 02:46:00.495494 | orchestrator |
2026-02-18 02:46:00.495512 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:46:00.495529 | orchestrator | Wednesday 18 February 2026 02:45:57 +0000 (0:00:01.152) 0:00:21.327 ****
2026-02-18 02:46:00.495546 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDINvgDbfTKU8g/LQIVE4mnsEGpkVArOSzenhanIpPiv7UQdb4izElRrYQW/aKUsD9G0Hl2z1DwEfSHS+QFQ0Py4bU3vS5uS2ixKMpGHm1D+DKYLA9vHKyUzmbaJdEbp7/z+I+0fOti+zCG8RX+9zYjwMri2PBZvgTUWiSkUAuzo22BNPev1KYkMkKtQptaGp9fmqjC77/fb3LKVfCTjsHiUhtyqnLVPBYM5gNGF51y2i7HUH8eNkbGRzvepyU945cYTx2oFdoyFvz897PiG9uZZSwr5i1QUxkMnmkLdicgvswFWVW6uni+6yvjV01tas4Rab69sVvTUNZ7XSU77CpVo/hm43+kNdC3Okn5QiNJEWNMnbgjOKglj24Ve+HKkpia4tVfQa6UDKpz0ANc0v/IxU9lo5C7h3uASf7a/FjM6P5kWS4H/i4IoZBogpoF94HNr86UhQUMCIm7A6VMMcvG38nwgu1FzTENneM0w4k/wBYrFfjE4UGbS8RIDhaZ+0M=)
2026-02-18 02:46:00.495564 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNRcjNp4AxetfqmtAX3Xc0M908zHZqN3RkIy8O9A3lzomtnemXoqgkMgLlTi7T77gM/ivabfcoj+QFQt914hTDo=)
2026-02-18 02:46:00.495582 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGUpOk5ujyaZTZ6aRtkXC3x6NtQCqOsaxlS6OD58bWFK)
2026-02-18 02:46:00.495598 | orchestrator |
2026-02-18 02:46:00.495616 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:46:00.495635 | orchestrator | Wednesday 18 February 2026 02:45:58 +0000 (0:00:01.146) 0:00:22.474 ****
2026-02-18 02:46:00.495654 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGVfXzAaFaIt6Zd82Zmf3B7ezegBo+DFrUtKL/oCr+TUh681X10FQuS1hEyImHR0KibV7oDCjN7E77V6nMVDm6K+B+lfUmxYkURtw5p9cLA5yPkiO1tPL76m/6/6X8BoU9HDjX4WNf6w018NNSmbyOvjTTdNG592Ge1JKW3RN/yDPx+18SGrXjihlJvJmWx2Hp2SpyA7IOgJMErWBvQ1HawQFbM67Lm6imNSXGHoSYi+aNFIW9/fr26gaiKoSDKLwBo4kzVf8guj3cnTV/Z13k87JrkwBvDSxs8vLnzfxyVBd2bDbT/RfuP0REHCLaLszairzHd+j/IPkOBwkhpzIVlWPR7Ad4j5UWbEwYYADiVJSTfPnzybAiwIhgChL9KNv2FoZvjueOvF9UpB9hSJKVLleyuwrw96T518PZHnO47cJPuTReUy3cun33+OdDa3zZmQZe/qAkD+rwbpN8iEZVLjhPAwYaOHSCF1QTWuWvDwxXy/sv/omwm5S9xhQpXw0=)
2026-02-18 02:46:00.495676 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMLOnRIZgVDyXOO8vsUb1dlVK/VaAghA+jYWnw1qIAN4fsnzgiBmdCChMFNNAxh8Ytv09BUgi1bkCHTRrfFUNpg=)
2026-02-18 02:46:00.495695 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMelA68daUqLuoQ6JgD7a80v2aSQWCCzYYZQgxv733aR)
2026-02-18 02:46:00.495714 | orchestrator |
2026-02-18 02:46:00.495727 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:46:00.495738 | orchestrator | Wednesday 18 February 2026 02:45:59 +0000 (0:00:01.103) 0:00:23.578 ****
2026-02-18 02:46:00.495749 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNLACwtJcFj2k8W3HaSnHJsPOqe9AwTmvizIlolGOfRbnUAJnVOYnVLkvlH44zlHPEXimQM69Y1DaLkYWJfo31c=)
2026-02-18 02:46:00.495787 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXhEXcpXdAOwD53z28w9desRWiuC1VS0Hav4CkV7sT5J1bi7aUWa3hbLr6xm/Ovai+TIkNpwtQvV4H4Rlqj2eezJgKhi/5+fkyE0sccC91S4XW8yuvZMnDT43x95UhLPDsdXnPQbhzidxKFEMV7jLnDKHMH0btz1GBbTh5shWV8bJ8LdCkqKTkXjGESZAXO8J3EYhoG0BWvEY+74y2z3qtHjEA2yU/Z+6IKIkScpfd6CbM0I4zJPY1sTbs0N4aqfWPhR07zFGxUUSpeHxYRFDNs6ZUV3j60MHlQpMoRfIZ/Z3nEETk/DpM3TI5Q9Bc44h4hKkGscm5T+OSSYIJMZ8Qc1mCAK5dAbLvNHgCcvC6ReMmmJhhYEbocTc87wVh/WU9m8P5ESrzlPDjHpXoaeawKL2ZK7uUDcDbzgAA0RcJp862d6jhSiGps3UeKJ146+ui7+vUtXZ5EpXgFkvxeaYvbeHubQsqIf10HhhfCQ4eEPDZ11ga+1LgfZ84fVK4JHc=)
2026-02-18 02:46:05.367471 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAIMJB+rbMb4THtpN4thKALwV9+HQJqpN/7sLzMmhGT0)
2026-02-18 02:46:05.367618 | orchestrator |
2026-02-18 02:46:05.367696 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:46:05.367722 | orchestrator | Wednesday 18 February 2026 02:46:00 +0000 (0:00:01.108) 0:00:24.686 ****
2026-02-18 02:46:05.367747 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDzB1I/Uztnd51FnzIx3yctJqtHDu90sCmuwyB3tFL3scoJGf1eUD4+p3uqN08xqJXKaZSDvP3Z07fe/8XGWiQey1D35mTA4dq2iB50eTNTa+ffYf1dpTP+Pwo6hiFtCJNDNrwvNcP9yDyXI6kd7dO4ClwujqWerDKbZH2jT1xcYA3Li27nVIp1Uh1WQll8oRQHmp0V21rQgi5JQxcpKw1vEytNuPjROXzPemqoEJnX4HBR9X0DsRZvL4hU6JxBwleSgEB/j1IRDDHAdfFwT6sOYLwQUAEfgepkPVTHS6mp08Gdpd2Tp9PxvKaEgO3C/Ly2OeX+/Sv4HsDHiaPC6N6qa9igfwYI3ZrRmFMMTQF9c58q/VIgxiXjmlQqDRbwYL2R5eWZjs/TmkBjU2M1aOv8GIH44YRU6VpPGojhe2tX92GLn/9vkH6ptYMBUDMXdP5SjH5xfDJ3brD+Q13mkeZilLQfvDDhj2BiN62ZkoDj9mNWQNK1bITaGP7qORryYlU=)
2026-02-18 02:46:05.367772 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGk0RyQhDaMsoRHTxs1s3SXnmVFjavOMeaQwYTCWqxWTJCsk5uEzFqJREgYowiXRBBM2m5M6A9ZmCk25NiwU7js=)
2026-02-18 02:46:05.367787 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN9EqGPrIspffj6bXtTBFKe43pOkz4PeVkyshI1ZdfJA)
2026-02-18 02:46:05.367802 | orchestrator |
2026-02-18 02:46:05.367821 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:46:05.367839 | orchestrator | Wednesday 18 February 2026 02:46:01 +0000 (0:00:01.219) 0:00:25.906 ****
2026-02-18 02:46:05.367858 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCCvyb3PCjQimVzVQuu48bjwSKf+NOPz+haMgWjYkU5PbmHRV14PRuBZMzf6NFoA+UgvjOtWcnLGBdgwiqfMWYM=)
2026-02-18 02:46:05.367877 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCunumMZVCrTaUIDOpyOQgodYs+1eZQccljPYhph9rMoqlrhCvR2dOETfF9y7S+VjMCcXddFalP5mG1Y6cDNcrjLV6hINjb3OPV0pKn53ufCzbmcpGQoh29e84yOy0JW9x1xODo7PQOtwXGOJNhy9dLFX7adQjZZmUvYganBxtuKmNKM7KnxQA/xmqRHcw+BhNMaUT75Kv0EpWas5MIseKmY8CkvC0JtvaCekJ5Z1tW7Jw4G86UBLXklRC4Z+xNpjh6mLtFndoJb4AHqQyV3p+c7t2d7AzyQNXSeYD9TvRcNVwrwiQwoM4ETFw84GFLkBpgANdd8FCCYhDsqzVJKcMvowPbh9PBAMOK/esI5zR89c1hbkSSuXrGLcr3oRKvR3v89jqqPFUs7bUWzQyLaGwmTq1RwcD2u2VF1hxYxdN0ruawF6bLNtEGm6KmWKIPOTSUpVSgysoVR1VtRALFrA++hva6T9zM6UXxYFtUysNNluW+OfVLg1abw4uLzsC1Ds=)
2026-02-18 02:46:05.367899 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP67ZM3Na+ix2GsBHjiN7uAc2mzQ/JDdORYcRforQEic)
2026-02-18 02:46:05.367918 | orchestrator |
2026-02-18 02:46:05.367938 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-18 02:46:05.367951 | orchestrator | Wednesday 18 February 2026 02:46:02 +0000 (0:00:01.153) 0:00:27.059 ****
2026-02-18 02:46:05.367963 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCq7qgMKliyW8K03qioJfNpTN+H44r+idEJeK8rsNkB+Y13nWsiVPvy0DjniHxuChRBBdN9X/LYwmyMgSqtrOyKyXQovgfBLoBqDnDx6oVbanTdnt5bG2juKfqYEHl7Slyj59NFbVh7PHeSMODEJBOf7jSTEtoLh5v8M3401i38aom5T7vw0i7Jke6GYBHM9rYKLLbc/S5BCy/1racqYHomAIyb+TUyzfwgtxa0FFOs+GfdzIIzk8oH2mfK2RC2ahrOkJpCgYkSGM08wrh9x/pSW+5TZSXcv3BAuKKybUYUjNc1+9KfFXU99Cf4GINhoUilGNBXhifIvCp91VMdnEyWgGzo5khvRhb2A4NNnzTAiV6EVEWlV+vgyvKDOv1s8Ga37el3ar3nogaTk39K8dKVtq0MlABOnuoHlTjTQHpnpiVJcZoswytGk+W0DAdDraVR5Opycic0bvHLFEmGjLixcuo1jhP8D4ntQhaaBFW3WYoAXmeAyOWzNyt0ZSkWZtk=)
2026-02-18 02:46:05.367992 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEQqeLAybcQbmpQWeaKx0JFZZYCDO9fbbUDanpWA/bc6FvNhOpexeT259K1GPAaLFstQS9CRt2WQ93Of2DDCd2E=)
2026-02-18 02:46:05.368005 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFFH+8btYIV0udfBBtV0jrnHRlw2iPK5opLBpsyenIn+)
2026-02-18 02:46:05.368018 | orchestrator |
2026-02-18 02:46:05.368032 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2026-02-18 02:46:05.368056 | orchestrator | Wednesday 18 February 2026 02:46:03 +0000 (0:00:01.139) 0:00:28.199 ****
2026-02-18 02:46:05.368070 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-18 02:46:05.368083 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-18 02:46:05.368095 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-18 02:46:05.368108 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-18 02:46:05.368165 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-18 02:46:05.368197 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-18 02:46:05.368211 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-18 02:46:05.368224 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:46:05.368236 | orchestrator |
2026-02-18 02:46:05.368249 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-02-18 02:46:05.368262 | orchestrator | Wednesday 18 February 2026 02:46:04 +0000 (0:00:00.195) 0:00:28.394 ****
2026-02-18 02:46:05.368274 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:46:05.368287 | orchestrator |
2026-02-18 02:46:05.368300 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-02-18 02:46:05.368318 | orchestrator | Wednesday 18 February 2026 02:46:04 +0000 (0:00:00.065) 0:00:28.460 ****
2026-02-18 02:46:05.368331 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:46:05.368345 | orchestrator |
2026-02-18 02:46:05.368358 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-02-18 02:46:05.368371 | orchestrator | Wednesday 18 February 2026 02:46:04 +0000 (0:00:00.061) 0:00:28.522 ****
2026-02-18 02:46:05.368384 | orchestrator | changed: [testbed-manager]
2026-02-18 02:46:05.368395 | orchestrator |
2026-02-18 02:46:05.368405 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 02:46:05.368416 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-18 02:46:05.368428 | orchestrator |
2026-02-18 02:46:05.368439 | orchestrator |
2026-02-18 02:46:05.368450 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 02:46:05.368461 | orchestrator | Wednesday 18 February 2026 02:46:05 +0000 (0:00:00.775) 0:00:29.297 ****
2026-02-18 02:46:05.368472 | orchestrator | ===============================================================================
2026-02-18 02:46:05.368482 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.06s
2026-02-18 02:46:05.368493 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.53s
2026-02-18 02:46:05.368505 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s
2026-02-18 02:46:05.368516 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s
2026-02-18 02:46:05.368527 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2026-02-18 02:46:05.368538 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2026-02-18 02:46:05.368548 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2026-02-18 02:46:05.368559 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2026-02-18 02:46:05.368570 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2026-02-18 02:46:05.368581 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2026-02-18 02:46:05.368592 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2026-02-18 02:46:05.368603 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-02-18 02:46:05.368613 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2026-02-18 02:46:05.368624 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2026-02-18 02:46:05.368642 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2026-02-18 02:46:05.368653 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2026-02-18 02:46:05.368664 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.78s
2026-02-18 02:46:05.368674 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.21s
2026-02-18 02:46:05.368686 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s
2026-02-18 02:46:05.368697 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s
2026-02-18 02:46:05.717806 | orchestrator | + osism apply squid
2026-02-18 02:46:17.873971 | orchestrator | 2026-02-18 02:46:17 | INFO  | Task 4d4ec5e8-b24a-4d1a-95d1-6a5fda089f28 (squid) was prepared for execution.
2026-02-18 02:46:17.874145 | orchestrator | 2026-02-18 02:46:17 | INFO  | It takes a moment until task 4d4ec5e8-b24a-4d1a-95d1-6a5fda089f28 (squid) has been started and output is visible here.
2026-02-18 02:48:17.650372 | orchestrator |
2026-02-18 02:48:17.650518 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-02-18 02:48:17.650546 | orchestrator |
2026-02-18 02:48:17.650567 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-02-18 02:48:17.650586 | orchestrator | Wednesday 18 February 2026 02:46:22 +0000 (0:00:00.174) 0:00:00.174 ****
2026-02-18 02:48:17.650605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-02-18 02:48:17.650625 | orchestrator |
2026-02-18 02:48:17.650644 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-02-18 02:48:17.650663 | orchestrator | Wednesday 18 February 2026 02:46:22 +0000 (0:00:00.103) 0:00:00.278 ****
2026-02-18 02:48:17.650684 | orchestrator | ok: [testbed-manager]
2026-02-18 02:48:17.650704 | orchestrator |
2026-02-18 02:48:17.650726 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-02-18 02:48:17.650747 | orchestrator | Wednesday 18 February 2026 02:46:23 +0000 (0:00:01.628) 0:00:01.907 ****
2026-02-18 02:48:17.650766 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-02-18 02:48:17.650785 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-02-18 02:48:17.650802 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-02-18 02:48:17.650819 | orchestrator |
2026-02-18 02:48:17.650839 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-02-18 02:48:17.650859 | orchestrator | Wednesday 18 February 2026 02:46:25 +0000 (0:00:01.228) 0:00:03.135 ****
2026-02-18 02:48:17.650880 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-02-18 02:48:17.650900 | orchestrator |
2026-02-18 02:48:17.650922 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-02-18 02:48:17.650942 | orchestrator | Wednesday 18 February 2026 02:46:26 +0000 (0:00:01.115) 0:00:04.251 ****
2026-02-18 02:48:17.650963 | orchestrator | ok: [testbed-manager]
2026-02-18 02:48:17.650985 | orchestrator |
2026-02-18 02:48:17.651004 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-02-18 02:48:17.651022 | orchestrator | Wednesday 18 February 2026 02:46:26 +0000 (0:00:00.393) 0:00:04.644 ****
2026-02-18 02:48:17.651034 | orchestrator | changed: [testbed-manager]
2026-02-18 02:48:17.651046 | orchestrator |
2026-02-18 02:48:17.651057 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-02-18 02:48:17.651068 | orchestrator | Wednesday 18 February 2026 02:46:27 +0000 (0:00:00.965) 0:00:05.609 ****
2026-02-18 02:48:17.651080 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-02-18 02:48:17.651096 | orchestrator | ok: [testbed-manager]
2026-02-18 02:48:17.651107 | orchestrator |
2026-02-18 02:48:17.651118 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-02-18 02:48:17.651153 | orchestrator | Wednesday 18 February 2026 02:47:04 +0000 (0:00:36.742) 0:00:42.352 ****
2026-02-18 02:48:17.651165 | orchestrator | changed: [testbed-manager]
2026-02-18 02:48:17.651176 | orchestrator |
2026-02-18 02:48:17.651187 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-02-18 02:48:17.651197 | orchestrator | Wednesday 18 February 2026 02:47:16 +0000 (0:00:12.079) 0:00:54.431 ****
2026-02-18 02:48:17.651208 | orchestrator | Pausing for 60 seconds
2026-02-18 02:48:17.651219 | orchestrator | changed: [testbed-manager]
2026-02-18 02:48:17.651230 | orchestrator |
2026-02-18 02:48:17.651241 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-02-18 02:48:17.651252 | orchestrator | Wednesday 18 February 2026 02:48:16 +0000 (0:01:00.097) 0:01:54.529 ****
2026-02-18 02:48:17.651263 | orchestrator | ok: [testbed-manager]
2026-02-18 02:48:17.651298 | orchestrator |
2026-02-18 02:48:17.651383 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-02-18 02:48:17.651399 | orchestrator | Wednesday 18 February 2026 02:48:16 +0000 (0:00:00.080) 0:01:54.610 ****
2026-02-18 02:48:17.651410 | orchestrator | changed: [testbed-manager]
2026-02-18 02:48:17.651421 | orchestrator |
2026-02-18 02:48:17.651431 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 02:48:17.651442 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:48:17.651453 | orchestrator |
2026-02-18 02:48:17.651464 | orchestrator |
2026-02-18 02:48:17.651475 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 02:48:17.651486 | orchestrator | Wednesday 18 February 2026 02:48:17 +0000 (0:00:00.665) 0:01:55.275 ****
2026-02-18 02:48:17.651496 | orchestrator | ===============================================================================
2026-02-18 02:48:17.651507 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s
2026-02-18 02:48:17.651518 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 36.74s
2026-02-18 02:48:17.651528 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.08s
2026-02-18 02:48:17.651554 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.63s
2026-02-18 02:48:17.651566 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.23s
2026-02-18 02:48:17.651576 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s
2026-02-18 02:48:17.651587 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.97s
2026-02-18 02:48:17.651598 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.67s
2026-02-18 02:48:17.651608 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s
2026-02-18 02:48:17.651619 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2026-02-18 02:48:17.651630 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2026-02-18 02:48:17.995288 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-18 02:48:17.995447 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-18 02:48:18.051572 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-18 02:48:18.051651 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-02-18 02:48:18.058963 | orchestrator | + set -e
2026-02-18 02:48:18.059030 | orchestrator | + NAMESPACE=kolla/release
2026-02-18 02:48:18.059037 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-18 02:48:18.063693 | orchestrator | ++ semver 9.5.0 9.0.0
2026-02-18 02:48:18.127662 | orchestrator | + [[ 1 -lt 0 ]]
2026-02-18 02:48:18.128217 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-02-18 02:48:30.307133 | orchestrator | 2026-02-18 02:48:30 | INFO  | Task dd4f56c7-fa0b-45e8-9718-fa08ec423754 (operator) was prepared for execution.
2026-02-18 02:48:30.307215 | orchestrator | 2026-02-18 02:48:30 | INFO  | It takes a moment until task dd4f56c7-fa0b-45e8-9718-fa08ec423754 (operator) has been started and output is visible here.
2026-02-18 02:48:46.551209 | orchestrator |
2026-02-18 02:48:46.551348 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-02-18 02:48:46.551421 | orchestrator |
2026-02-18 02:48:46.551434 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-18 02:48:46.551445 | orchestrator | Wednesday 18 February 2026 02:48:34 +0000 (0:00:00.152) 0:00:00.152 ****
2026-02-18 02:48:46.551457 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:48:46.551470 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:48:46.551481 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:48:46.551492 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:48:46.551502 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:48:46.551513 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:48:46.551524 | orchestrator |
2026-02-18 02:48:46.551535 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-02-18 02:48:46.551546 | orchestrator | Wednesday 18 February 2026 02:48:38 +0000 (0:00:03.441) 0:00:03.594
**** 2026-02-18 02:48:46.551557 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:48:46.551567 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:48:46.551578 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:48:46.551606 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:48:46.551617 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:48:46.551628 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:48:46.551639 | orchestrator | 2026-02-18 02:48:46.551650 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-18 02:48:46.551661 | orchestrator | 2026-02-18 02:48:46.551672 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-18 02:48:46.551683 | orchestrator | Wednesday 18 February 2026 02:48:38 +0000 (0:00:00.794) 0:00:04.388 **** 2026-02-18 02:48:46.551694 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:48:46.551705 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:48:46.551717 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:48:46.551729 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:48:46.551742 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:48:46.551754 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:48:46.551767 | orchestrator | 2026-02-18 02:48:46.551779 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-18 02:48:46.551791 | orchestrator | Wednesday 18 February 2026 02:48:39 +0000 (0:00:00.194) 0:00:04.583 **** 2026-02-18 02:48:46.551803 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:48:46.551815 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:48:46.551827 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:48:46.551839 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:48:46.551851 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:48:46.551862 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:48:46.551874 | orchestrator | 2026-02-18 02:48:46.551886 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-18 02:48:46.551899 | orchestrator | Wednesday 18 February 2026 02:48:39 +0000 (0:00:00.200) 0:00:04.783 **** 2026-02-18 02:48:46.551911 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:48:46.551925 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:48:46.551937 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:48:46.551949 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:48:46.551961 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:48:46.551973 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:48:46.551985 | orchestrator | 2026-02-18 02:48:46.551996 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-18 02:48:46.552007 | orchestrator | Wednesday 18 February 2026 02:48:39 +0000 (0:00:00.770) 0:00:05.554 **** 2026-02-18 02:48:46.552018 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:48:46.552029 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:48:46.552039 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:48:46.552050 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:48:46.552061 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:48:46.552072 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:48:46.552104 | orchestrator | 2026-02-18 02:48:46.552115 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-18 02:48:46.552126 | orchestrator | Wednesday 18 February 2026 02:48:40 +0000 (0:00:00.779) 0:00:06.333 **** 2026-02-18 02:48:46.552137 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-18 02:48:46.552147 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-18 02:48:46.552158 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-18 02:48:46.552168 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-18 02:48:46.552179 | 
orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-18 02:48:46.552190 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-18 02:48:46.552200 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-18 02:48:46.552211 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-18 02:48:46.552221 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-18 02:48:46.552232 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-18 02:48:46.552242 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-18 02:48:46.552252 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-18 02:48:46.552263 | orchestrator | 2026-02-18 02:48:46.552274 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-18 02:48:46.552284 | orchestrator | Wednesday 18 February 2026 02:48:41 +0000 (0:00:01.149) 0:00:07.483 **** 2026-02-18 02:48:46.552295 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:48:46.552305 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:48:46.552316 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:48:46.552326 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:48:46.552336 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:48:46.552347 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:48:46.552420 | orchestrator | 2026-02-18 02:48:46.552437 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-18 02:48:46.552449 | orchestrator | Wednesday 18 February 2026 02:48:43 +0000 (0:00:01.203) 0:00:08.687 **** 2026-02-18 02:48:46.552459 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-18 02:48:46.552470 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-18 02:48:46.552481 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-18 02:48:46.552492 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-18 02:48:46.552521 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-18 02:48:46.552533 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-18 02:48:46.552543 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-18 02:48:46.552554 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-18 02:48:46.552564 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-18 02:48:46.552575 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-18 02:48:46.552586 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-18 02:48:46.552596 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-18 02:48:46.552607 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-18 02:48:46.552617 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-18 02:48:46.552627 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-18 02:48:46.552638 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-18 02:48:46.552649 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-18 02:48:46.552660 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-18 02:48:46.552671 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-18 02:48:46.552681 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-18 02:48:46.552701 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-18 02:48:46.552712 | 
orchestrator | 2026-02-18 02:48:46.552722 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-18 02:48:46.552734 | orchestrator | Wednesday 18 February 2026 02:48:44 +0000 (0:00:01.202) 0:00:09.889 **** 2026-02-18 02:48:46.552745 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:48:46.552756 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:48:46.552766 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:48:46.552777 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:48:46.552787 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:48:46.552798 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:48:46.552808 | orchestrator | 2026-02-18 02:48:46.552819 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-18 02:48:46.552830 | orchestrator | Wednesday 18 February 2026 02:48:44 +0000 (0:00:00.182) 0:00:10.072 **** 2026-02-18 02:48:46.552840 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:48:46.552851 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:48:46.552861 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:48:46.552872 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:48:46.552882 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:48:46.552893 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:48:46.552903 | orchestrator | 2026-02-18 02:48:46.552914 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-18 02:48:46.552925 | orchestrator | Wednesday 18 February 2026 02:48:44 +0000 (0:00:00.191) 0:00:10.263 **** 2026-02-18 02:48:46.552935 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:48:46.552946 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:48:46.552956 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:48:46.552967 | orchestrator | changed: [testbed-node-1] 2026-02-18 
02:48:46.552977 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:48:46.552988 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:48:46.552998 | orchestrator | 2026-02-18 02:48:46.553009 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-18 02:48:46.553020 | orchestrator | Wednesday 18 February 2026 02:48:45 +0000 (0:00:00.588) 0:00:10.852 **** 2026-02-18 02:48:46.553030 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:48:46.553041 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:48:46.553052 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:48:46.553062 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:48:46.553073 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:48:46.553083 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:48:46.553094 | orchestrator | 2026-02-18 02:48:46.553105 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-18 02:48:46.553115 | orchestrator | Wednesday 18 February 2026 02:48:45 +0000 (0:00:00.208) 0:00:11.061 **** 2026-02-18 02:48:46.553126 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-18 02:48:46.553147 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-18 02:48:46.553158 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-18 02:48:46.553169 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-18 02:48:46.553179 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:48:46.553190 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:48:46.553201 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-18 02:48:46.553211 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:48:46.553222 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:48:46.553233 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:48:46.553243 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-18 
02:48:46.553254 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:48:46.553264 | orchestrator | 2026-02-18 02:48:46.553275 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-18 02:48:46.553286 | orchestrator | Wednesday 18 February 2026 02:48:46 +0000 (0:00:00.687) 0:00:11.748 **** 2026-02-18 02:48:46.553303 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:48:46.553314 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:48:46.553325 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:48:46.553335 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:48:46.553346 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:48:46.553356 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:48:46.553421 | orchestrator | 2026-02-18 02:48:46.553441 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-18 02:48:46.553460 | orchestrator | Wednesday 18 February 2026 02:48:46 +0000 (0:00:00.167) 0:00:11.916 **** 2026-02-18 02:48:46.553481 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:48:46.553500 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:48:46.553518 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:48:46.553534 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:48:46.553555 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:48:47.883490 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:48:47.883586 | orchestrator | 2026-02-18 02:48:47.883598 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-18 02:48:47.883608 | orchestrator | Wednesday 18 February 2026 02:48:46 +0000 (0:00:00.186) 0:00:12.103 **** 2026-02-18 02:48:47.883616 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:48:47.883624 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:48:47.883632 | orchestrator | skipping: [testbed-node-2] 2026-02-18 
02:48:47.883640 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:48:47.883648 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:48:47.883655 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:48:47.883663 | orchestrator | 2026-02-18 02:48:47.883671 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-18 02:48:47.883679 | orchestrator | Wednesday 18 February 2026 02:48:46 +0000 (0:00:00.158) 0:00:12.261 **** 2026-02-18 02:48:47.883687 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:48:47.883694 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:48:47.883719 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:48:47.883727 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:48:47.883735 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:48:47.883743 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:48:47.883751 | orchestrator | 2026-02-18 02:48:47.883758 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-18 02:48:47.883766 | orchestrator | Wednesday 18 February 2026 02:48:47 +0000 (0:00:00.640) 0:00:12.902 **** 2026-02-18 02:48:47.883774 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:48:47.883782 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:48:47.883790 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:48:47.883798 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:48:47.883806 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:48:47.883813 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:48:47.883821 | orchestrator | 2026-02-18 02:48:47.883829 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 02:48:47.883838 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-18 02:48:47.883848 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-18 02:48:47.883856 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-18 02:48:47.883864 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-18 02:48:47.883872 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-18 02:48:47.883897 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-18 02:48:47.883905 | orchestrator | 2026-02-18 02:48:47.883913 | orchestrator | 2026-02-18 02:48:47.883920 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 02:48:47.883928 | orchestrator | Wednesday 18 February 2026 02:48:47 +0000 (0:00:00.258) 0:00:13.160 **** 2026-02-18 02:48:47.883936 | orchestrator | =============================================================================== 2026-02-18 02:48:47.883945 | orchestrator | Gathering Facts --------------------------------------------------------- 3.44s 2026-02-18 02:48:47.883959 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s 2026-02-18 02:48:47.883973 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.20s 2026-02-18 02:48:47.883987 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s 2026-02-18 02:48:47.884000 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s 2026-02-18 02:48:47.884013 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.78s 2026-02-18 02:48:47.884026 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.77s 2026-02-18 02:48:47.884038 | orchestrator | osism.commons.operator : Set ssh 
authorized keys ------------------------ 0.69s 2026-02-18 02:48:47.884049 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s 2026-02-18 02:48:47.884062 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s 2026-02-18 02:48:47.884075 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s 2026-02-18 02:48:47.884088 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s 2026-02-18 02:48:47.884101 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2026-02-18 02:48:47.884114 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s 2026-02-18 02:48:47.884127 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s 2026-02-18 02:48:47.884141 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2026-02-18 02:48:47.884154 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2026-02-18 02:48:47.884168 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2026-02-18 02:48:47.884181 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2026-02-18 02:48:48.303898 | orchestrator | + osism apply --environment custom facts 2026-02-18 02:48:50.313770 | orchestrator | 2026-02-18 02:48:50 | INFO  | Trying to run play facts in environment custom 2026-02-18 02:49:00.401348 | orchestrator | 2026-02-18 02:49:00 | INFO  | Task 579c23b3-5e9e-45ab-8f8e-f40c34160216 (facts) was prepared for execution. 2026-02-18 02:49:00.401476 | orchestrator | 2026-02-18 02:49:00 | INFO  | It takes a moment until task 579c23b3-5e9e-45ab-8f8e-f40c34160216 (facts) has been started and output is visible here. 
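The xtrace earlier in this log (`[[ 9.5.0 != latest ]]`, `semver 9.5.0 10.0.0-0` → `-1`, `[[ -1 -ge 0 ]]`, then `set-kolla-namespace.sh kolla/release`) reflects a semver gate that picks the Kolla image namespace from the manager version. A minimal sketch of that gate, assuming `semver_cmp` as a stand-in for the job's `semver` helper (prints -1/0/1) and `kolla` as a hypothetical value for the branch not taken in this run:

```shell
#!/usr/bin/env bash
set -e

# Stand-in for the `semver` helper seen in the trace: prints -1, 0 or 1
# for a<b, a==b, a>b. A pre-release such as 10.0.0-0 sorts below 10.0.0.
semver_cmp() {
  local a=${1%%-*} b=${2%%-*}   # compare the x.y.z cores first
  local IFS=.
  local -a A=($a) B=($b)
  for i in 0 1 2; do
    if (( ${A[i]:-0} < ${B[i]:-0} )); then echo -1; return; fi
    if (( ${A[i]:-0} > ${B[i]:-0} )); then echo 1; return; fi
  done
  # equal cores: the version carrying a pre-release tag is lower
  [[ $1 == *-* && $2 != *-* ]] && { echo -1; return; }
  [[ $1 != *-* && $2 == *-* ]] && { echo 1; return; }
  echo 0
}

MANAGER_VERSION=9.5.0                # value seen in the log above
if [[ $MANAGER_VERSION != latest ]]; then
  if [[ $(semver_cmp "$MANAGER_VERSION" 10.0.0-0) -ge 0 ]]; then
    NAMESPACE=kolla                  # hypothetical branch; not taken here
  else
    NAMESPACE=kolla/release          # taken: 9.5.0 < 10.0.0-0
  fi
fi
echo "$NAMESPACE"
```

With the logged inputs this reproduces the observed decision: `semver_cmp 9.5.0 10.0.0-0` yields -1, the `-ge 0` test fails, and the namespace falls through to `kolla/release`.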
2026-02-18 02:49:44.173877 | orchestrator | 2026-02-18 02:49:44.173985 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-02-18 02:49:44.174001 | orchestrator | 2026-02-18 02:49:44.174012 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-18 02:49:44.174116 | orchestrator | Wednesday 18 February 2026 02:49:04 +0000 (0:00:00.095) 0:00:00.095 **** 2026-02-18 02:49:44.174135 | orchestrator | ok: [testbed-manager] 2026-02-18 02:49:44.174153 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:49:44.174165 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:49:44.174175 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:49:44.174184 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:49:44.174194 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:49:44.174222 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:49:44.174232 | orchestrator | 2026-02-18 02:49:44.174242 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-02-18 02:49:44.174252 | orchestrator | Wednesday 18 February 2026 02:49:06 +0000 (0:00:01.498) 0:00:01.594 **** 2026-02-18 02:49:44.174261 | orchestrator | ok: [testbed-manager] 2026-02-18 02:49:44.174271 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:49:44.174281 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:49:44.174290 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:49:44.174300 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:49:44.174309 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:49:44.174319 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:49:44.174328 | orchestrator | 2026-02-18 02:49:44.174338 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-02-18 02:49:44.174347 | orchestrator | 2026-02-18 02:49:44.174357 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-02-18 02:49:44.174367 | orchestrator | Wednesday 18 February 2026 02:49:07 +0000 (0:00:01.216) 0:00:02.810 **** 2026-02-18 02:49:44.174376 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:49:44.174386 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:49:44.174396 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:49:44.174405 | orchestrator | 2026-02-18 02:49:44.174415 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-18 02:49:44.174427 | orchestrator | Wednesday 18 February 2026 02:49:07 +0000 (0:00:00.100) 0:00:02.910 **** 2026-02-18 02:49:44.174462 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:49:44.174481 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:49:44.174493 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:49:44.174504 | orchestrator | 2026-02-18 02:49:44.174515 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-18 02:49:44.174527 | orchestrator | Wednesday 18 February 2026 02:49:07 +0000 (0:00:00.239) 0:00:03.149 **** 2026-02-18 02:49:44.174538 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:49:44.174548 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:49:44.174559 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:49:44.174570 | orchestrator | 2026-02-18 02:49:44.174581 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-18 02:49:44.174593 | orchestrator | Wednesday 18 February 2026 02:49:08 +0000 (0:00:00.241) 0:00:03.391 **** 2026-02-18 02:49:44.174606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 02:49:44.174618 | orchestrator | 2026-02-18 02:49:44.174629 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-02-18 02:49:44.174640 | orchestrator | Wednesday 18 February 2026 02:49:08 +0000 (0:00:00.154) 0:00:03.546 **** 2026-02-18 02:49:44.174651 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:49:44.174662 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:49:44.174673 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:49:44.174684 | orchestrator | 2026-02-18 02:49:44.174695 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-18 02:49:44.174706 | orchestrator | Wednesday 18 February 2026 02:49:08 +0000 (0:00:00.455) 0:00:04.002 **** 2026-02-18 02:49:44.174716 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:49:44.174727 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:49:44.174739 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:49:44.174750 | orchestrator | 2026-02-18 02:49:44.174761 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-18 02:49:44.174773 | orchestrator | Wednesday 18 February 2026 02:49:08 +0000 (0:00:00.137) 0:00:04.139 **** 2026-02-18 02:49:44.174783 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:49:44.174793 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:49:44.174802 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:49:44.174812 | orchestrator | 2026-02-18 02:49:44.174821 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-18 02:49:44.174839 | orchestrator | Wednesday 18 February 2026 02:49:09 +0000 (0:00:01.042) 0:00:05.181 **** 2026-02-18 02:49:44.174849 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:49:44.174858 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:49:44.174868 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:49:44.174877 | orchestrator | 2026-02-18 02:49:44.174887 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-18 
02:49:44.174897 | orchestrator | Wednesday 18 February 2026 02:49:10 +0000 (0:00:00.491) 0:00:05.673 **** 2026-02-18 02:49:44.174907 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:49:44.174916 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:49:44.174926 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:49:44.174936 | orchestrator | 2026-02-18 02:49:44.174945 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-18 02:49:44.174987 | orchestrator | Wednesday 18 February 2026 02:49:11 +0000 (0:00:01.108) 0:00:06.782 **** 2026-02-18 02:49:44.174998 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:49:44.175007 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:49:44.175017 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:49:44.175026 | orchestrator | 2026-02-18 02:49:44.175036 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-02-18 02:49:44.175045 | orchestrator | Wednesday 18 February 2026 02:49:27 +0000 (0:00:16.124) 0:00:22.906 **** 2026-02-18 02:49:44.175055 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:49:44.175064 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:49:44.175074 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:49:44.175083 | orchestrator | 2026-02-18 02:49:44.175093 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-02-18 02:49:44.175120 | orchestrator | Wednesday 18 February 2026 02:49:27 +0000 (0:00:00.115) 0:00:23.022 **** 2026-02-18 02:49:44.175130 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:49:44.175140 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:49:44.175149 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:49:44.175159 | orchestrator | 2026-02-18 02:49:44.175172 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-18 
02:49:44.175182 | orchestrator | Wednesday 18 February 2026 02:49:35 +0000 (0:00:07.777) 0:00:30.800 ****
2026-02-18 02:49:44.175192 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:49:44.175201 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:49:44.175211 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:49:44.175220 | orchestrator |
2026-02-18 02:49:44.175230 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-18 02:49:44.175239 | orchestrator | Wednesday 18 February 2026 02:49:35 +0000 (0:00:00.427) 0:00:31.227 ****
2026-02-18 02:49:44.175249 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-18 02:49:44.175259 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-18 02:49:44.175268 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-18 02:49:44.175278 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-18 02:49:44.175287 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-18 02:49:44.175297 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-18 02:49:44.175306 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-18 02:49:44.175315 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-18 02:49:44.175325 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-18 02:49:44.175334 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-18 02:49:44.175344 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-18 02:49:44.175353 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-18 02:49:44.175363 | orchestrator |
2026-02-18 02:49:44.175372 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-18 02:49:44.175388 | orchestrator | Wednesday 18 February 2026 02:49:39 +0000 (0:00:03.402) 0:00:34.630 ****
2026-02-18 02:49:44.175398 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:49:44.175407 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:49:44.175417 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:49:44.175426 | orchestrator |
2026-02-18 02:49:44.175458 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-18 02:49:44.175470 | orchestrator |
2026-02-18 02:49:44.175480 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-18 02:49:44.175489 | orchestrator | Wednesday 18 February 2026 02:49:40 +0000 (0:00:01.225) 0:00:35.855 ****
2026-02-18 02:49:44.175499 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:49:44.175509 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:49:44.175518 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:49:44.175528 | orchestrator | ok: [testbed-manager]
2026-02-18 02:49:44.175538 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:49:44.175547 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:49:44.175557 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:49:44.175566 | orchestrator |
2026-02-18 02:49:44.175576 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 02:49:44.175586 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:49:44.175597 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:49:44.175608 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:49:44.175618 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:49:44.175628 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 02:49:44.175638 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 02:49:44.175648 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 02:49:44.175658 | orchestrator |
2026-02-18 02:49:44.175667 | orchestrator |
2026-02-18 02:49:44.175677 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 02:49:44.175687 | orchestrator | Wednesday 18 February 2026 02:49:44 +0000 (0:00:03.622) 0:00:39.478 ****
2026-02-18 02:49:44.175697 | orchestrator | ===============================================================================
2026-02-18 02:49:44.175707 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.12s
2026-02-18 02:49:44.175716 | orchestrator | Install required packages (Debian) -------------------------------------- 7.78s
2026-02-18 02:49:44.175726 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.62s
2026-02-18 02:49:44.175736 | orchestrator | Copy fact files --------------------------------------------------------- 3.40s
2026-02-18 02:49:44.175745 | orchestrator | Create custom facts directory ------------------------------------------- 1.50s
2026-02-18 02:49:44.175755 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.23s
2026-02-18 02:49:44.175771 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-02-18 02:49:44.454270 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.11s
2026-02-18 02:49:44.454353 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s
2026-02-18 02:49:44.454426 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s
2026-02-18 02:49:44.454485 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.46s
2026-02-18 02:49:44.454494 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2026-02-18 02:49:44.454500 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2026-02-18 02:49:44.454507 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.24s
2026-02-18 02:49:44.454514 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-02-18 02:49:44.454521 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-02-18 02:49:44.454528 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2026-02-18 02:49:44.454534 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-02-18 02:49:44.805478 | orchestrator | + osism apply bootstrap
2026-02-18 02:49:57.022527 | orchestrator | 2026-02-18 02:49:57 | INFO  | Task a7181f1f-6199-472c-b166-a263c1bb3a8b (bootstrap) was prepared for execution.
2026-02-18 02:49:57.022642 | orchestrator | 2026-02-18 02:49:57 | INFO  | It takes a moment until task a7181f1f-6199-472c-b166-a263c1bb3a8b (bootstrap) has been started and output is visible here.
2026-02-18 02:50:13.770399 | orchestrator |
2026-02-18 02:50:13.770552 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-18 02:50:13.770572 | orchestrator |
2026-02-18 02:50:13.770583 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-18 02:50:13.770595 | orchestrator | Wednesday 18 February 2026 02:50:01 +0000 (0:00:00.161) 0:00:00.161 ****
2026-02-18 02:50:13.770606 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:13.770618 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:13.770628 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:13.770639 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:13.770650 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:13.770661 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:13.770670 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:13.770679 | orchestrator |
2026-02-18 02:50:13.770690 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-18 02:50:13.770701 | orchestrator |
2026-02-18 02:50:13.770711 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-18 02:50:13.770722 | orchestrator | Wednesday 18 February 2026 02:50:01 +0000 (0:00:00.258) 0:00:00.420 ****
2026-02-18 02:50:13.770732 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:13.770741 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:13.770751 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:13.770761 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:13.770772 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:13.770783 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:13.770793 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:13.770804 | orchestrator |
2026-02-18 02:50:13.770814 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-18 02:50:13.770824 | orchestrator |
2026-02-18 02:50:13.770835 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-18 02:50:13.770846 | orchestrator | Wednesday 18 February 2026 02:50:05 +0000 (0:00:03.550) 0:00:03.970 ****
2026-02-18 02:50:13.770858 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-18 02:50:13.770869 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-18 02:50:13.770881 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-18 02:50:13.770891 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-18 02:50:13.770901 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-18 02:50:13.770912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-18 02:50:13.770924 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-18 02:50:13.770935 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-18 02:50:13.770945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 02:50:13.770983 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-18 02:50:13.770996 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:50:13.771006 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-18 02:50:13.771016 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-18 02:50:13.771025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 02:50:13.771035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-18 02:50:13.771046 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-18 02:50:13.771055 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-18 02:50:13.771065 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-18 02:50:13.771075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-18 02:50:13.771085 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-18 02:50:13.771095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 02:50:13.771104 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-18 02:50:13.771114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-18 02:50:13.771123 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-18 02:50:13.771133 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-18 02:50:13.771142 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-18 02:50:13.771152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-18 02:50:13.771162 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-18 02:50:13.771171 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-18 02:50:13.771181 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:50:13.771191 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-18 02:50:13.771200 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 02:50:13.771210 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-18 02:50:13.771220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-18 02:50:13.771230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 02:50:13.771239 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-18 02:50:13.771249 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-18 02:50:13.771259 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-18 02:50:13.771268 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 02:50:13.771278 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-18 02:50:13.771287 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:50:13.771296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-18 02:50:13.771306 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:50:13.771315 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-18 02:50:13.771325 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-18 02:50:13.771335 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-18 02:50:13.771363 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-18 02:50:13.771373 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:50:13.771382 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-18 02:50:13.771392 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-18 02:50:13.771401 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-18 02:50:13.771410 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-18 02:50:13.771420 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:50:13.771430 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-18 02:50:13.771445 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-18 02:50:13.771472 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:50:13.771501 | orchestrator |
2026-02-18 02:50:13.771511 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-02-18 02:50:13.771521 | orchestrator |
2026-02-18 02:50:13.771531 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-02-18 02:50:13.771540 | orchestrator | Wednesday 18 February 2026 02:50:05 +0000 (0:00:00.560) 0:00:04.531 ****
2026-02-18 02:50:13.771550 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:13.771561 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:13.771571 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:13.771582 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:13.771592 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:13.771602 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:13.771610 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:13.771620 | orchestrator |
2026-02-18 02:50:13.771629 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-02-18 02:50:13.771638 | orchestrator | Wednesday 18 February 2026 02:50:07 +0000 (0:00:01.235) 0:00:05.766 ****
2026-02-18 02:50:13.771647 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:13.771656 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:13.771665 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:13.771674 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:13.771684 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:13.771693 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:13.771702 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:13.771711 | orchestrator |
2026-02-18 02:50:13.771721 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-02-18 02:50:13.771730 | orchestrator | Wednesday 18 February 2026 02:50:08 +0000 (0:00:00.332) 0:00:07.085 ****
2026-02-18 02:50:13.771740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:50:13.771752 | orchestrator |
2026-02-18 02:50:13.771761 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-02-18 02:50:13.771772 | orchestrator | Wednesday 18 February 2026 02:50:08 +0000 (0:00:00.332) 0:00:07.417 ****
2026-02-18 02:50:13.771781 | orchestrator | changed: [testbed-manager]
2026-02-18 02:50:13.771791 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:50:13.771801 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:50:13.771810 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:50:13.771819 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:50:13.771829 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:50:13.771840 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:50:13.771851 | orchestrator |
2026-02-18 02:50:13.771861 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-02-18 02:50:13.771871 | orchestrator | Wednesday 18 February 2026 02:50:11 +0000 (0:00:02.334) 0:00:09.752 ****
2026-02-18 02:50:13.771881 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:50:13.771892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:50:13.771904 | orchestrator |
2026-02-18 02:50:13.771914 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-02-18 02:50:13.771925 | orchestrator | Wednesday 18 February 2026 02:50:11 +0000 (0:00:00.295) 0:00:10.047 ****
2026-02-18 02:50:13.771936 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:50:13.771946 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:50:13.771957 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:50:13.771967 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:50:13.771977 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:50:13.771988 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:50:13.772009 | orchestrator |
2026-02-18 02:50:13.772026 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-02-18 02:50:13.772036 | orchestrator | Wednesday 18 February 2026 02:50:12 +0000 (0:00:01.040) 0:00:11.088 ****
2026-02-18 02:50:13.772046 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:50:13.772055 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:50:13.772065 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:50:13.772076 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:50:13.772086 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:50:13.772096 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:50:13.772106 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:50:13.772114 | orchestrator |
2026-02-18 02:50:13.772121 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-02-18 02:50:13.772127 | orchestrator | Wednesday 18 February 2026 02:50:13 +0000 (0:00:00.595) 0:00:11.684 ****
2026-02-18 02:50:13.772133 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:50:13.772140 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:50:13.772146 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:50:13.772152 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:50:13.772158 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:50:13.772164 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:50:13.772171 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:13.772177 | orchestrator |
2026-02-18 02:50:13.772183 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-18 02:50:13.772191 | orchestrator | Wednesday 18 February 2026 02:50:13 +0000 (0:00:00.500) 0:00:12.184 ****
2026-02-18 02:50:13.772197 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:50:13.772203 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:50:13.772220 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:50:26.223971 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:50:26.224098 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:50:26.224120 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:50:26.224135 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:50:26.224151 | orchestrator |
2026-02-18 02:50:26.224166 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-18 02:50:26.224182 | orchestrator | Wednesday 18 February 2026 02:50:13 +0000 (0:00:00.276) 0:00:12.460 ****
2026-02-18 02:50:26.224196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:50:26.224207 | orchestrator |
2026-02-18 02:50:26.224215 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-18 02:50:26.224224 | orchestrator | Wednesday 18 February 2026 02:50:14 +0000 (0:00:00.337) 0:00:12.797 ****
2026-02-18 02:50:26.224233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:50:26.224241 | orchestrator |
2026-02-18 02:50:26.224248 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-18 02:50:26.224256 | orchestrator | Wednesday 18 February 2026 02:50:14 +0000 (0:00:00.346) 0:00:13.143 ****
2026-02-18 02:50:26.224264 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:26.224273 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:26.224281 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:26.224289 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:26.224297 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:26.224305 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.224313 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:26.224320 | orchestrator |
2026-02-18 02:50:26.224328 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-18 02:50:26.224336 | orchestrator | Wednesday 18 February 2026 02:50:16 +0000 (0:00:01.458) 0:00:14.602 ****
2026-02-18 02:50:26.224367 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:50:26.224376 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:50:26.224406 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:50:26.224423 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:50:26.224431 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:50:26.224438 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:50:26.224446 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:50:26.224463 | orchestrator |
2026-02-18 02:50:26.224472 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-18 02:50:26.224481 | orchestrator | Wednesday 18 February 2026 02:50:16 +0000 (0:00:00.325) 0:00:14.928 ****
2026-02-18 02:50:26.224509 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.224518 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:26.224527 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:26.224536 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:26.224545 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:26.224553 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:26.224562 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:26.224571 | orchestrator |
2026-02-18 02:50:26.224580 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-18 02:50:26.224588 | orchestrator | Wednesday 18 February 2026 02:50:16 +0000 (0:00:00.526) 0:00:15.454 ****
2026-02-18 02:50:26.224597 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:50:26.224606 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:50:26.224615 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:50:26.224624 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:50:26.224633 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:50:26.224642 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:50:26.224651 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:50:26.224660 | orchestrator |
2026-02-18 02:50:26.224669 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-18 02:50:26.224680 | orchestrator | Wednesday 18 February 2026 02:50:17 +0000 (0:00:00.286) 0:00:15.741 ****
2026-02-18 02:50:26.224689 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.224697 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:50:26.224706 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:50:26.224715 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:50:26.224724 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:50:26.224733 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:50:26.224755 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:50:26.224764 | orchestrator |
2026-02-18 02:50:26.224774 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-18 02:50:26.224783 | orchestrator | Wednesday 18 February 2026 02:50:17 +0000 (0:00:00.540) 0:00:16.282 ****
2026-02-18 02:50:26.224792 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.224801 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:50:26.224810 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:50:26.224820 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:50:26.224829 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:50:26.224838 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:50:26.224845 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:50:26.224853 | orchestrator |
2026-02-18 02:50:26.224861 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-18 02:50:26.224869 | orchestrator | Wednesday 18 February 2026 02:50:18 +0000 (0:00:01.160) 0:00:17.442 ****
2026-02-18 02:50:26.224877 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:26.224885 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:26.224893 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:26.224900 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:26.224908 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:26.224916 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:26.224924 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.224931 | orchestrator |
2026-02-18 02:50:26.224939 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-18 02:50:26.224953 | orchestrator | Wednesday 18 February 2026 02:50:19 +0000 (0:00:01.060) 0:00:18.503 ****
2026-02-18 02:50:26.224978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:50:26.224986 | orchestrator |
2026-02-18 02:50:26.224994 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-18 02:50:26.225002 | orchestrator | Wednesday 18 February 2026 02:50:20 +0000 (0:00:00.313) 0:00:18.817 ****
2026-02-18 02:50:26.225010 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:50:26.225017 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:50:26.225025 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:50:26.225033 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:50:26.225040 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:50:26.225048 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:50:26.225056 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:50:26.225063 | orchestrator |
2026-02-18 02:50:26.225071 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-18 02:50:26.225079 | orchestrator | Wednesday 18 February 2026 02:50:21 +0000 (0:00:01.310) 0:00:20.127 ****
2026-02-18 02:50:26.225087 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.225095 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:26.225102 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:26.225110 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:26.225118 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:26.225125 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:26.225133 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:26.225140 | orchestrator |
2026-02-18 02:50:26.225148 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-18 02:50:26.225156 | orchestrator | Wednesday 18 February 2026 02:50:21 +0000 (0:00:00.237) 0:00:20.365 ****
2026-02-18 02:50:26.225164 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.225171 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:26.225179 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:26.225186 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:26.225194 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:26.225202 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:26.225209 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:26.225217 | orchestrator |
2026-02-18 02:50:26.225225 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-18 02:50:26.225233 | orchestrator | Wednesday 18 February 2026 02:50:22 +0000 (0:00:00.258) 0:00:20.623 ****
2026-02-18 02:50:26.225240 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.225248 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:26.225256 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:26.225263 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:26.225271 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:26.225278 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:26.225286 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:26.225294 | orchestrator |
2026-02-18 02:50:26.225301 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-18 02:50:26.225309 | orchestrator | Wednesday 18 February 2026 02:50:22 +0000 (0:00:00.270) 0:00:20.894 ****
2026-02-18 02:50:26.225318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:50:26.225327 | orchestrator |
2026-02-18 02:50:26.225335 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-18 02:50:26.225343 | orchestrator | Wednesday 18 February 2026 02:50:22 +0000 (0:00:00.309) 0:00:21.203 ****
2026-02-18 02:50:26.225350 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.225358 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:26.225371 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:26.225379 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:26.225386 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:26.225394 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:26.225402 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:26.225410 | orchestrator |
2026-02-18 02:50:26.225417 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-18 02:50:26.225425 | orchestrator | Wednesday 18 February 2026 02:50:23 +0000 (0:00:00.560) 0:00:21.763 ****
2026-02-18 02:50:26.225433 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:50:26.225441 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:50:26.225449 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:50:26.225456 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:50:26.225464 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:50:26.225472 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:50:26.225480 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:50:26.225487 | orchestrator |
2026-02-18 02:50:26.225562 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-18 02:50:26.225570 | orchestrator | Wednesday 18 February 2026 02:50:23 +0000 (0:00:00.251) 0:00:22.015 ****
2026-02-18 02:50:26.225578 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.225586 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:26.225594 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:26.225601 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:26.225609 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:50:26.225617 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:50:26.225624 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:50:26.225632 | orchestrator |
2026-02-18 02:50:26.225640 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-18 02:50:26.225648 | orchestrator | Wednesday 18 February 2026 02:50:24 +0000 (0:00:01.080) 0:00:23.095 ****
2026-02-18 02:50:26.225655 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.225663 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:26.225671 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:26.225678 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:26.225686 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:50:26.225694 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:50:26.225701 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:50:26.225709 | orchestrator |
2026-02-18 02:50:26.225717 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-18 02:50:26.225725 | orchestrator | Wednesday 18 February 2026 02:50:25 +0000 (0:00:00.557) 0:00:23.653 ****
2026-02-18 02:50:26.225733 | orchestrator | ok: [testbed-manager]
2026-02-18 02:50:26.225740 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:50:26.225748 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:50:26.225764 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:50:26.225779 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:51:08.448154 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:51:08.448271 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:51:08.448287 | orchestrator |
2026-02-18 02:51:08.448300 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-18 02:51:08.448313 | orchestrator | Wednesday 18 February 2026 02:50:26 +0000 (0:00:01.143) 0:00:24.796 ****
2026-02-18 02:51:08.448324 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:51:08.448336 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:51:08.448347 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:51:08.448358 | orchestrator | changed: [testbed-manager]
2026-02-18 02:51:08.448369 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:51:08.448380 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:51:08.448391 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:51:08.448402 | orchestrator |
2026-02-18 02:51:08.448413 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-18 02:51:08.448424 | orchestrator | Wednesday 18 February 2026 02:50:41 +0000 (0:00:15.204) 0:00:40.001 ****
2026-02-18 02:51:08.448435 | orchestrator | ok: [testbed-manager]
2026-02-18 02:51:08.448468 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:51:08.448479 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:51:08.448490 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:51:08.448501 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:51:08.448511 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:51:08.448522 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:51:08.448533 | orchestrator |
2026-02-18 02:51:08.448544 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-18 02:51:08.448587 | orchestrator | Wednesday 18 February 2026 02:50:41 +0000 (0:00:00.252) 0:00:40.253 ****
2026-02-18 02:51:08.448606 | orchestrator | ok: [testbed-manager]
2026-02-18 02:51:08.448624 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:51:08.448641 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:51:08.448660 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:51:08.448679 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:51:08.448697 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:51:08.448711 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:51:08.448723 | orchestrator |
2026-02-18 02:51:08.448737 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-18 02:51:08.448750 | orchestrator | Wednesday 18 February 2026 02:50:41 +0000 (0:00:00.248) 0:00:40.501 ****
2026-02-18 02:51:08.448763 | orchestrator | ok: [testbed-manager]
2026-02-18 02:51:08.448775 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:51:08.448787 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:51:08.448800 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:51:08.448811 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:51:08.448824 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:51:08.448838 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:51:08.448851 | orchestrator |
2026-02-18 02:51:08.448864 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-18 02:51:08.448876 | orchestrator | Wednesday 18 February 2026 02:50:42 +0000 (0:00:00.269) 0:00:40.771 ****
2026-02-18
02:51:08.448890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 02:51:08.448905 | orchestrator | 2026-02-18 02:51:08.448919 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-18 02:51:08.448933 | orchestrator | Wednesday 18 February 2026 02:50:42 +0000 (0:00:00.382) 0:00:41.153 **** 2026-02-18 02:51:08.448953 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:51:08.448971 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:51:08.448989 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:51:08.449007 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:51:08.449025 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:51:08.449043 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:51:08.449061 | orchestrator | ok: [testbed-manager] 2026-02-18 02:51:08.449078 | orchestrator | 2026-02-18 02:51:08.449096 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-18 02:51:08.449114 | orchestrator | Wednesday 18 February 2026 02:50:44 +0000 (0:00:01.917) 0:00:43.070 **** 2026-02-18 02:51:08.449131 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:51:08.449149 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:51:08.449168 | orchestrator | changed: [testbed-manager] 2026-02-18 02:51:08.449186 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:51:08.449204 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:51:08.449223 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:51:08.449243 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:51:08.449261 | orchestrator | 2026-02-18 02:51:08.449280 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-18 02:51:08.449312 | 
orchestrator | Wednesday 18 February 2026 02:50:45 +0000 (0:00:01.068) 0:00:44.139 **** 2026-02-18 02:51:08.449324 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:51:08.449335 | orchestrator | ok: [testbed-manager] 2026-02-18 02:51:08.449346 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:51:08.449368 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:51:08.449379 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:51:08.449390 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:51:08.449400 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:51:08.449411 | orchestrator | 2026-02-18 02:51:08.449422 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-18 02:51:08.449433 | orchestrator | Wednesday 18 February 2026 02:50:46 +0000 (0:00:00.821) 0:00:44.960 **** 2026-02-18 02:51:08.449445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 02:51:08.449457 | orchestrator | 2026-02-18 02:51:08.449468 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-18 02:51:08.449480 | orchestrator | Wednesday 18 February 2026 02:50:46 +0000 (0:00:00.370) 0:00:45.331 **** 2026-02-18 02:51:08.449491 | orchestrator | changed: [testbed-manager] 2026-02-18 02:51:08.449502 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:51:08.449512 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:51:08.449523 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:51:08.449534 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:51:08.449545 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:51:08.449578 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:51:08.449589 | orchestrator | 2026-02-18 02:51:08.449620 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-02-18 02:51:08.449632 | orchestrator | Wednesday 18 February 2026 02:50:47 +0000 (0:00:01.064) 0:00:46.395 **** 2026-02-18 02:51:08.449643 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:51:08.449653 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:51:08.449664 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:51:08.449675 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:51:08.449685 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:51:08.449696 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:51:08.449707 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:51:08.449717 | orchestrator | 2026-02-18 02:51:08.449728 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-18 02:51:08.449739 | orchestrator | Wednesday 18 February 2026 02:50:48 +0000 (0:00:00.259) 0:00:46.655 **** 2026-02-18 02:51:08.449750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 02:51:08.449761 | orchestrator | 2026-02-18 02:51:08.449772 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-18 02:51:08.449783 | orchestrator | Wednesday 18 February 2026 02:50:48 +0000 (0:00:00.358) 0:00:47.013 **** 2026-02-18 02:51:08.449793 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:51:08.449804 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:51:08.449815 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:51:08.449825 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:51:08.449836 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:51:08.449847 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:51:08.449857 | orchestrator | ok: [testbed-manager] 2026-02-18 02:51:08.449868 | 
orchestrator | 2026-02-18 02:51:08.449879 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-18 02:51:08.449890 | orchestrator | Wednesday 18 February 2026 02:50:50 +0000 (0:00:01.792) 0:00:48.805 **** 2026-02-18 02:51:08.449900 | orchestrator | changed: [testbed-manager] 2026-02-18 02:51:08.449911 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:51:08.449922 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:51:08.449932 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:51:08.449943 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:51:08.449954 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:51:08.449964 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:51:08.449982 | orchestrator | 2026-02-18 02:51:08.449994 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-18 02:51:08.450004 | orchestrator | Wednesday 18 February 2026 02:50:51 +0000 (0:00:01.215) 0:00:50.021 **** 2026-02-18 02:51:08.450069 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:51:08.450084 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:51:08.450095 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:51:08.450105 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:51:08.450116 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:51:08.450127 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:51:08.450137 | orchestrator | changed: [testbed-manager] 2026-02-18 02:51:08.450148 | orchestrator | 2026-02-18 02:51:08.450159 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-18 02:51:08.450170 | orchestrator | Wednesday 18 February 2026 02:51:05 +0000 (0:00:13.773) 0:01:03.794 **** 2026-02-18 02:51:08.450181 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:51:08.450191 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:51:08.450202 | orchestrator | ok: 
[testbed-node-5] 2026-02-18 02:51:08.450213 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:51:08.450223 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:51:08.450234 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:51:08.450245 | orchestrator | ok: [testbed-manager] 2026-02-18 02:51:08.450255 | orchestrator | 2026-02-18 02:51:08.450266 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-18 02:51:08.450277 | orchestrator | Wednesday 18 February 2026 02:51:06 +0000 (0:00:01.461) 0:01:05.255 **** 2026-02-18 02:51:08.450287 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:51:08.450299 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:51:08.450318 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:51:08.450337 | orchestrator | ok: [testbed-manager] 2026-02-18 02:51:08.450357 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:51:08.450376 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:51:08.450395 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:51:08.450415 | orchestrator | 2026-02-18 02:51:08.450435 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-18 02:51:08.450455 | orchestrator | Wednesday 18 February 2026 02:51:07 +0000 (0:00:00.884) 0:01:06.139 **** 2026-02-18 02:51:08.450493 | orchestrator | ok: [testbed-manager] 2026-02-18 02:51:08.450514 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:51:08.450527 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:51:08.450538 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:51:08.450602 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:51:08.450617 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:51:08.450628 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:51:08.450638 | orchestrator | 2026-02-18 02:51:08.450649 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-18 02:51:08.450660 | orchestrator | 
Wednesday 18 February 2026 02:51:07 +0000 (0:00:00.285) 0:01:06.424 **** 2026-02-18 02:51:08.450671 | orchestrator | ok: [testbed-manager] 2026-02-18 02:51:08.450681 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:51:08.450692 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:51:08.450703 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:51:08.450713 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:51:08.450724 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:51:08.450734 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:51:08.450745 | orchestrator | 2026-02-18 02:51:08.450755 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-18 02:51:08.450766 | orchestrator | Wednesday 18 February 2026 02:51:08 +0000 (0:00:00.253) 0:01:06.677 **** 2026-02-18 02:51:08.450777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 02:51:08.450789 | orchestrator | 2026-02-18 02:51:08.450811 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-18 02:53:29.098173 | orchestrator | Wednesday 18 February 2026 02:51:08 +0000 (0:00:00.340) 0:01:07.018 **** 2026-02-18 02:53:29.098277 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:53:29.098292 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:53:29.098302 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:53:29.098327 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:53:29.098338 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:53:29.098357 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:53:29.098367 | orchestrator | ok: [testbed-manager] 2026-02-18 02:53:29.098382 | orchestrator | 2026-02-18 02:53:29.098399 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-02-18 02:53:29.098416 | orchestrator | Wednesday 18 February 2026 02:51:10 +0000 (0:00:01.615) 0:01:08.633 **** 2026-02-18 02:53:29.098432 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:53:29.098457 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:53:29.098477 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:53:29.098493 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:53:29.098509 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:53:29.098524 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:53:29.098540 | orchestrator | changed: [testbed-manager] 2026-02-18 02:53:29.098556 | orchestrator | 2026-02-18 02:53:29.098573 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-02-18 02:53:29.098590 | orchestrator | Wednesday 18 February 2026 02:51:10 +0000 (0:00:00.541) 0:01:09.175 **** 2026-02-18 02:53:29.098607 | orchestrator | ok: [testbed-manager] 2026-02-18 02:53:29.098624 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:53:29.098642 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:53:29.098661 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:53:29.098680 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:53:29.098698 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:53:29.098718 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:53:29.098736 | orchestrator | 2026-02-18 02:53:29.098786 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-02-18 02:53:29.098806 | orchestrator | Wednesday 18 February 2026 02:51:10 +0000 (0:00:00.317) 0:01:09.492 **** 2026-02-18 02:53:29.098825 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:53:29.098841 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:53:29.098853 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:53:29.098864 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:53:29.098875 | orchestrator | ok: [testbed-manager] 
2026-02-18 02:53:29.098886 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:53:29.098897 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:53:29.098909 | orchestrator | 2026-02-18 02:53:29.098920 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-02-18 02:53:29.098931 | orchestrator | Wednesday 18 February 2026 02:51:11 +0000 (0:00:01.072) 0:01:10.564 **** 2026-02-18 02:53:29.098942 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:53:29.098953 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:53:29.098964 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:53:29.098975 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:53:29.098986 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:53:29.098996 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:53:29.099007 | orchestrator | changed: [testbed-manager] 2026-02-18 02:53:29.099018 | orchestrator | 2026-02-18 02:53:29.099034 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-02-18 02:53:29.099051 | orchestrator | Wednesday 18 February 2026 02:51:13 +0000 (0:00:01.530) 0:01:12.095 **** 2026-02-18 02:53:29.099069 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:53:29.099087 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:53:29.099104 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:53:29.099120 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:53:29.099138 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:53:29.099156 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:53:29.099175 | orchestrator | ok: [testbed-manager] 2026-02-18 02:53:29.099193 | orchestrator | 2026-02-18 02:53:29.099210 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-02-18 02:53:29.099264 | orchestrator | Wednesday 18 February 2026 02:51:16 +0000 (0:00:02.539) 0:01:14.635 **** 2026-02-18 02:53:29.099284 | orchestrator | ok: 
[testbed-manager] 2026-02-18 02:53:29.099304 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:53:29.099321 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:53:29.099339 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:53:29.099355 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:53:29.099369 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:53:29.099386 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:53:29.099406 | orchestrator | 2026-02-18 02:53:29.099426 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-02-18 02:53:29.099445 | orchestrator | Wednesday 18 February 2026 02:51:50 +0000 (0:00:34.067) 0:01:48.702 **** 2026-02-18 02:53:29.099464 | orchestrator | changed: [testbed-manager] 2026-02-18 02:53:29.099483 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:53:29.099502 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:53:29.099521 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:53:29.099540 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:53:29.099558 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:53:29.099575 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:53:29.099591 | orchestrator | 2026-02-18 02:53:29.099610 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-02-18 02:53:29.099628 | orchestrator | Wednesday 18 February 2026 02:53:11 +0000 (0:01:21.263) 0:03:09.966 **** 2026-02-18 02:53:29.099648 | orchestrator | ok: [testbed-manager] 2026-02-18 02:53:29.099668 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:53:29.099685 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:53:29.099704 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:53:29.099716 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:53:29.099727 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:53:29.099738 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:53:29.099779 | orchestrator | 2026-02-18 02:53:29.099791 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-02-18 02:53:29.099802 | orchestrator | Wednesday 18 February 2026 02:53:13 +0000 (0:00:01.975) 0:03:11.941 **** 2026-02-18 02:53:29.099813 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:53:29.099824 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:53:29.099834 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:53:29.099845 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:53:29.099855 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:53:29.099866 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:53:29.099877 | orchestrator | changed: [testbed-manager] 2026-02-18 02:53:29.099887 | orchestrator | 2026-02-18 02:53:29.099898 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-02-18 02:53:29.099909 | orchestrator | Wednesday 18 February 2026 02:53:26 +0000 (0:00:13.425) 0:03:25.367 **** 2026-02-18 02:53:29.099960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-02-18 02:53:29.099997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-02-18 02:53:29.100024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-02-18 02:53:29.100038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-18 02:53:29.100050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-18 02:53:29.100061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-02-18 02:53:29.100072 | orchestrator | 2026-02-18 02:53:29.100083 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-02-18 02:53:29.100094 | orchestrator | Wednesday 18 February 2026 02:53:27 +0000 (0:00:00.473) 0:03:25.841 **** 2026-02-18 02:53:29.100105 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-02-18 02:53:29.100115 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:53:29.100127 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-18 02:53:29.100137 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:53:29.100148 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-18 02:53:29.100159 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:53:29.100175 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-18 02:53:29.100186 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:53:29.100197 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-18 02:53:29.100208 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-18 02:53:29.100219 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-18 02:53:29.100229 | orchestrator | 2026-02-18 02:53:29.100240 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-02-18 02:53:29.100251 | orchestrator | Wednesday 18 February 2026 02:53:28 +0000 (0:00:01.736) 0:03:27.577 **** 2026-02-18 02:53:29.100261 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-18 02:53:29.100274 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-18 02:53:29.100285 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-18 02:53:29.100296 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-18 02:53:29.100306 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-02-18 02:53:29.100324 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-18 02:53:35.612265 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-18 02:53:35.612369 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-18 02:53:35.612408 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-18 02:53:35.612420 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-18 02:53:35.612432 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-18 02:53:35.612443 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-18 02:53:35.612454 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:53:35.612467 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-18 02:53:35.612478 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-18 02:53:35.612488 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-18 02:53:35.612499 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-18 02:53:35.612510 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-18 02:53:35.612521 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-18 02:53:35.612532 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-18 02:53:35.612542 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-18 02:53:35.612553 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-18 02:53:35.612564 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-18 02:53:35.612575 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-18 02:53:35.612585 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:53:35.612596 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-18 02:53:35.612607 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-18 02:53:35.612617 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-18 02:53:35.612628 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-18 02:53:35.612638 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-18 02:53:35.612649 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-18 02:53:35.612659 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-18 02:53:35.612670 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-18 02:53:35.612681 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-18 02:53:35.612691 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-18 02:53:35.612702 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 
'value': 16777216})
2026-02-18 02:53:35.612726 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-18 02:53:35.612738 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-18 02:53:35.612748 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:53:35.612806 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-18 02:53:35.612819 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-18 02:53:35.612832 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-18 02:53:35.612856 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-18 02:53:35.612868 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:53:35.612880 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-18 02:53:35.612893 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-18 02:53:35.612905 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-18 02:53:35.612918 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-18 02:53:35.612930 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-18 02:53:35.612958 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-18 02:53:35.612969 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-18 02:53:35.612980 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-18 02:53:35.612991 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-18 02:53:35.613001 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-18 02:53:35.613012 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-18 02:53:35.613023 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-18 02:53:35.613033 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-18 02:53:35.613044 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-18 02:53:35.613055 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-18 02:53:35.613065 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-18 02:53:35.613076 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-18 02:53:35.613087 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-18 02:53:35.613098 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-18 02:53:35.613108 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-18 02:53:35.613119 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-18 02:53:35.613129 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-18 02:53:35.613140 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-18 02:53:35.613151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-18 02:53:35.613161 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-18 02:53:35.613172 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-18 02:53:35.613183 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-18 02:53:35.613193 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-18 02:53:35.613204 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-18 02:53:35.613215 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-18 02:53:35.613233 | orchestrator |
2026-02-18 02:53:35.613245 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-18 02:53:35.613255 | orchestrator | Wednesday 18 February 2026 02:53:34 +0000 (0:00:05.551) 0:03:33.129 ****
2026-02-18 02:53:35.613266 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-18 02:53:35.613277 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-18 02:53:35.613288 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-18 02:53:35.613298 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-18 02:53:35.613315 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-18 02:53:35.613326 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-18 02:53:35.613336 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-18 02:53:35.613347 | orchestrator |
2026-02-18 02:53:35.613358 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-18 02:53:35.613368 | orchestrator | Wednesday 18 February 2026 02:53:35 +0000 (0:00:00.585) 0:03:33.715 ****
2026-02-18 02:53:35.613379 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:35.613390 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:53:35.613400 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:35.613411 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:53:35.613422 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:35.613433 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:53:35.613443 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:35.613454 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:53:35.613465 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:35.613475 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:35.613492 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:48.844568 | orchestrator |
2026-02-18 02:53:48.844671 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-18 02:53:48.844700 | orchestrator | Wednesday 18 February 2026 02:53:35 +0000 (0:00:00.466) 0:03:34.181 ****
2026-02-18 02:53:48.844709 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:48.844726 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:48.844735 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:53:48.844744 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:53:48.844753 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:48.844762 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:48.844826 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:53:48.844836 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:53:48.844844 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:48.844852 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:48.844859 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-18 02:53:48.844867 | orchestrator |
2026-02-18 02:53:48.844874 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-02-18 02:53:48.844905 | orchestrator | Wednesday 18 February 2026 02:53:36 +0000 (0:00:00.626) 0:03:34.808 ****
2026-02-18 02:53:48.844914 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-18 02:53:48.844921 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:53:48.844929 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-18 02:53:48.844936 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-18 02:53:48.844944 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:53:48.844951 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:53:48.844958 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-18 02:53:48.844966 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:53:48.844974 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-18 02:53:48.844981 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-18 02:53:48.844989 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-18 02:53:48.844997 | orchestrator |
2026-02-18 02:53:48.845004 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-02-18 02:53:48.845011 | orchestrator | Wednesday 18 February 2026 02:53:36 +0000 (0:00:00.347) 0:03:35.417 ****
2026-02-18 02:53:48.845019 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:53:48.845026 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:53:48.845033 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:53:48.845039 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:53:48.845046 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:53:48.845053 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:53:48.845060 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:53:48.845067 | orchestrator |
2026-02-18 02:53:48.845074 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-02-18 02:53:48.845081 | orchestrator | Wednesday 18 February 2026 02:53:37 +0000 (0:00:00.347) 0:03:35.764 ****
2026-02-18 02:53:48.845088 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:53:48.845097 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:53:48.845104 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:53:48.845111 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:53:48.845118 | orchestrator | ok: [testbed-manager]
2026-02-18 02:53:48.845125 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:53:48.845132 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:53:48.845138 | orchestrator |
2026-02-18 02:53:48.845145 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-18 02:53:48.845151 | orchestrator | Wednesday 18 February 2026 02:53:42 +0000 (0:00:05.587) 0:03:41.352 ****
2026-02-18 02:53:48.845158 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-18 02:53:48.845164 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:53:48.845171 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-18 02:53:48.845178 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-18 02:53:48.845185 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:53:48.845192 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-18 02:53:48.845200 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:53:48.845207 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-18 02:53:48.845216 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:53:48.845223 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-18 02:53:48.845246 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:53:48.845255 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:53:48.845263 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-18 02:53:48.845271 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:53:48.845288 | orchestrator |
2026-02-18 02:53:48.845296 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-02-18 02:53:48.845304 | orchestrator | Wednesday 18 February 2026 02:53:43 +0000 (0:00:00.356) 0:03:41.709 ****
2026-02-18 02:53:48.845311 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-02-18 02:53:48.845318 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-02-18 02:53:48.845326 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-02-18 02:53:48.845353 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-02-18 02:53:48.845362 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-02-18 02:53:48.845369 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-02-18 02:53:48.845377 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-02-18 02:53:48.845383 | orchestrator |
2026-02-18 02:53:48.845390 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-02-18 02:53:48.845396 | orchestrator | Wednesday 18 February 2026 02:53:44 +0000 (0:00:01.113) 0:03:42.822 ****
2026-02-18 02:53:48.845406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:53:48.845416 | orchestrator |
2026-02-18 02:53:48.845424 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-02-18 02:53:48.845432 | orchestrator | Wednesday 18 February 2026 02:53:44 +0000 (0:00:00.552) 0:03:43.375 ****
2026-02-18 02:53:48.845439 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:53:48.845446 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:53:48.845454 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:53:48.845461 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:53:48.845468 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:53:48.845475 | orchestrator | ok: [testbed-manager]
2026-02-18 02:53:48.845483 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:53:48.845490 | orchestrator |
2026-02-18 02:53:48.845498 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-02-18 02:53:48.845506 | orchestrator | Wednesday 18 February 2026 02:53:45 +0000 (0:00:01.165) 0:03:44.541 ****
2026-02-18 02:53:48.845513 | orchestrator | ok: [testbed-manager]
2026-02-18 02:53:48.845520 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:53:48.845528 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:53:48.845535 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:53:48.845542 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:53:48.845549 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:53:48.845557 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:53:48.845564 | orchestrator |
2026-02-18 02:53:48.845571 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-02-18 02:53:48.845578 | orchestrator | Wednesday 18 February 2026 02:53:46 +0000 (0:00:00.584) 0:03:45.125 ****
2026-02-18 02:53:48.845585 | orchestrator | changed: [testbed-manager]
2026-02-18 02:53:48.845593 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:53:48.845600 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:53:48.845607 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:53:48.845615 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:53:48.845622 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:53:48.845629 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:53:48.845637 | orchestrator |
2026-02-18 02:53:48.845644 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-02-18 02:53:48.845652 | orchestrator | Wednesday 18 February 2026 02:53:47 +0000 (0:00:00.640) 0:03:45.766 ****
2026-02-18 02:53:48.845659 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:53:48.845667 | orchestrator | ok: [testbed-manager]
2026-02-18 02:53:48.845674 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:53:48.845681 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:53:48.845688 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:53:48.845695 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:53:48.845703 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:53:48.845710 | orchestrator |
2026-02-18 02:53:48.845717 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-02-18 02:53:48.845734 | orchestrator | Wednesday 18 February 2026 02:53:47 +0000 (0:00:00.557) 0:03:46.323 ****
2026-02-18 02:53:48.845751 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771381797.57783, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:48.845762 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771381805.3446016, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:48.845822 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771381779.3239536, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:48.845857 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771381794.0756261, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:54.005591 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771381807.2052345, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:54.005702 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771381799.5270984, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:54.005719 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771381794.140574, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:54.005754 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:54.005850 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:54.005865 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:54.005876 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:54.005916 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:54.005929 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:54.005940 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 02:53:54.005961 | orchestrator |
2026-02-18 02:53:54.005975 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-02-18 02:53:54.005988 | orchestrator | Wednesday 18 February 2026 02:53:48 +0000 (0:00:01.090) 0:03:47.414 ****
2026-02-18 02:53:54.005999 | orchestrator | changed: [testbed-manager]
2026-02-18 02:53:54.006011 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:53:54.006088 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:53:54.006100 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:53:54.006111 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:53:54.006125 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:53:54.006137 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:53:54.006150 | orchestrator |
2026-02-18 02:53:54.006198 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-02-18 02:53:54.006213 | orchestrator | Wednesday 18 February 2026 02:53:50 +0000 (0:00:01.191) 0:03:48.605 ****
2026-02-18 02:53:54.006226 | orchestrator | changed: [testbed-manager]
2026-02-18 02:53:54.006237 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:53:54.006248 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:53:54.006258 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:53:54.006269 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:53:54.006280 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:53:54.006290 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:53:54.006301 | orchestrator |
2026-02-18 02:53:54.006318 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-02-18 02:53:54.006330 | orchestrator | Wednesday 18 February 2026 02:53:51 +0000 (0:00:01.216) 0:03:49.822 ****
2026-02-18 02:53:54.006340 | orchestrator | changed: [testbed-manager]
2026-02-18 02:53:54.006351 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:53:54.006361 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:53:54.006372 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:53:54.006382 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:53:54.006393 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:53:54.006404 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:53:54.006414 | orchestrator |
2026-02-18 02:53:54.006425 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-02-18 02:53:54.006436 | orchestrator | Wednesday 18 February 2026 02:53:52 +0000 (0:00:01.228) 0:03:51.051 ****
2026-02-18 02:53:54.006447 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:53:54.006458 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:53:54.006468 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:53:54.006479 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:53:54.006489 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:53:54.006500 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:53:54.006510 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:53:54.006521 | orchestrator |
2026-02-18 02:53:54.006532 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-02-18 02:53:54.006543 | orchestrator | Wednesday 18 February 2026 02:53:52 +0000 (0:00:00.312) 0:03:51.364 ****
2026-02-18 02:53:54.006553 | orchestrator | ok: [testbed-manager]
2026-02-18 02:53:54.006565 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:53:54.006576 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:53:54.006587 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:53:54.006597 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:53:54.006608 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:53:54.006618 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:53:54.006629 | orchestrator |
2026-02-18 02:53:54.006640 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-02-18 02:53:54.006650 | orchestrator | Wednesday 18 February 2026 02:53:53 +0000 (0:00:00.773) 0:03:52.137 ****
2026-02-18 02:53:54.006663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:53:54.006684 | orchestrator |
2026-02-18 02:53:54.006696 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-02-18 02:53:54.006716 | orchestrator | Wednesday 18 February 2026 02:53:53 +0000 (0:00:00.445) 0:03:52.583 ****
2026-02-18 02:55:14.015833 | orchestrator | ok: [testbed-manager]
2026-02-18 02:55:14.016056 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:55:14.016078 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:55:14.016090 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:55:14.016101 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:55:14.016112 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:55:14.016123 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:55:14.016135 | orchestrator |
2026-02-18 02:55:14.016151 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-02-18 02:55:14.016172 | orchestrator | Wednesday 18 February 2026 02:54:02 +0000 (0:00:08.205) 0:04:00.789 ****
2026-02-18 02:55:14.016190 | orchestrator | ok: [testbed-manager]
2026-02-18 02:55:14.016209 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:55:14.016227 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:55:14.016246 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:55:14.016266 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:55:14.016284 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:55:14.016303 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:55:14.016314 | orchestrator |
2026-02-18 02:55:14.016325 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-02-18 02:55:14.016339 | orchestrator | Wednesday 18 February 2026 02:54:03 +0000 (0:00:01.460) 0:04:02.249 ****
2026-02-18 02:55:14.016352 | orchestrator | ok: [testbed-manager]
2026-02-18 02:55:14.016365 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:55:14.016378 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:55:14.016390 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:55:14.016402 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:55:14.016415 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:55:14.016427 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:55:14.016439 | orchestrator |
2026-02-18 02:55:14.016452 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-02-18 02:55:14.016465 | orchestrator | Wednesday 18 February 2026 02:54:04 +0000 (0:00:01.154) 0:04:03.404 ****
2026-02-18 02:55:14.016477 | orchestrator | ok: [testbed-manager]
2026-02-18 02:55:14.016490 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:55:14.016503 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:55:14.016515 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:55:14.016529 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:55:14.016542 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:55:14.016554 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:55:14.016566 | orchestrator |
2026-02-18 02:55:14.016582 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-18 02:55:14.016602 | orchestrator | Wednesday 18 February 2026 02:54:05 +0000 (0:00:00.302) 0:04:03.706 ****
2026-02-18 02:55:14.016635 | orchestrator | ok: [testbed-manager]
2026-02-18 02:55:14.016653 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:55:14.016670 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:55:14.016687 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:55:14.016704 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:55:14.016719 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:55:14.016735 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:55:14.016755 | orchestrator |
2026-02-18 02:55:14.016773 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-18 02:55:14.016792 | orchestrator | Wednesday 18 February 2026 02:54:05 +0000 (0:00:00.329) 0:04:04.036 ****
2026-02-18 02:55:14.016812 | orchestrator | ok: [testbed-manager]
2026-02-18 02:55:14.016831 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:55:14.016850 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:55:14.016891 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:55:14.016903 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:55:14.016945 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:55:14.016957 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:55:14.016967 | orchestrator |
2026-02-18 02:55:14.016978 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-18 02:55:14.016990 | orchestrator | Wednesday 18 February 2026 02:54:05 +0000 (0:00:00.303) 0:04:04.339 ****
2026-02-18 02:55:14.017001 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:55:14.017011 | orchestrator | ok: [testbed-manager]
2026-02-18 02:55:14.017022 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:55:14.017033 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:55:14.017043 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:55:14.017054 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:55:14.017065 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:55:14.017076 | orchestrator |
2026-02-18 02:55:14.017087 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-18 02:55:14.017097 | orchestrator | Wednesday 18 February 2026 02:54:12 +0000 (0:00:06.490) 0:04:10.829 ****
2026-02-18 02:55:14.017111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:55:14.017124 | orchestrator |
2026-02-18 02:55:14.017135 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-18 02:55:14.017146 | orchestrator | Wednesday 18 February 2026 02:54:12 +0000 (0:00:00.422) 0:04:11.251 ****
2026-02-18 02:55:14.017157 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-18 02:55:14.017168 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-18 02:55:14.017179 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-18 02:55:14.017190 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-18 02:55:14.017201 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:55:14.017230 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-18 02:55:14.017241 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-18 02:55:14.017252 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:55:14.017263 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-18 02:55:14.017274 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:55:14.017285 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-18 02:55:14.017296 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:55:14.017306 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-18 02:55:14.017317 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-18 02:55:14.017328 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-18 02:55:14.017339 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-18 02:55:14.017371 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:55:14.017382 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:55:14.017392 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-18 02:55:14.017403 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-18 02:55:14.017414 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:55:14.017425 | orchestrator |
2026-02-18 02:55:14.017436 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-18 02:55:14.017447 | orchestrator | Wednesday 18 February 2026 02:54:13 +0000 (0:00:00.387) 0:04:11.638 ****
2026-02-18 02:55:14.017458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:55:14.017469 | orchestrator |
2026-02-18 02:55:14.017480 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-18 02:55:14.017500 | orchestrator | Wednesday 18 February 2026 02:54:13 +0000 (0:00:00.438) 0:04:12.077 ****
2026-02-18 02:55:14.017511 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-18 02:55:14.017522 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-18 02:55:14.017532 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:55:14.017543 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-18 02:55:14.017554 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:55:14.017565 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:55:14.017575 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-18 02:55:14.017586 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:55:14.017597 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-18 02:55:14.017607 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-18 02:55:14.017618 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:55:14.017629 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:55:14.017639 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-18 02:55:14.017650 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:55:14.017661 | orchestrator |
2026-02-18 02:55:14.017671 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-18 02:55:14.017682 | orchestrator | Wednesday 18 February 2026 02:54:13 +0000 (0:00:00.445) 0:04:12.437 ****
2026-02-18 02:55:14.017693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:55:14.017704 | orchestrator |
2026-02-18 02:55:14.017715 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-18 02:55:14.017726 | orchestrator | Wednesday 18 February 2026 02:54:14 +0000 (0:00:00.445) 0:04:12.883 ****
2026-02-18 02:55:14.017737 |
orchestrator | changed: [testbed-node-0] 2026-02-18 02:55:14.017747 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:55:14.017758 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:55:14.017769 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:55:14.017784 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:55:14.017795 | orchestrator | changed: [testbed-manager] 2026-02-18 02:55:14.017806 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:55:14.017817 | orchestrator | 2026-02-18 02:55:14.017827 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-02-18 02:55:14.017838 | orchestrator | Wednesday 18 February 2026 02:54:49 +0000 (0:00:34.774) 0:04:47.657 **** 2026-02-18 02:55:14.017849 | orchestrator | changed: [testbed-manager] 2026-02-18 02:55:14.017860 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:55:14.017870 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:55:14.017881 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:55:14.017891 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:55:14.017902 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:55:14.017938 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:55:14.017949 | orchestrator | 2026-02-18 02:55:14.017960 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-02-18 02:55:14.017971 | orchestrator | Wednesday 18 February 2026 02:54:58 +0000 (0:00:08.999) 0:04:56.657 **** 2026-02-18 02:55:14.017982 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:55:14.017992 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:55:14.018003 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:55:14.018013 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:55:14.018088 | orchestrator | changed: [testbed-manager] 2026-02-18 02:55:14.018100 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:55:14.018110 | orchestrator | changed: 
[testbed-node-2] 2026-02-18 02:55:14.018121 | orchestrator | 2026-02-18 02:55:14.018132 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-02-18 02:55:14.018150 | orchestrator | Wednesday 18 February 2026 02:55:06 +0000 (0:00:08.043) 0:05:04.700 **** 2026-02-18 02:55:14.018161 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:55:14.018172 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:55:14.018182 | orchestrator | ok: [testbed-manager] 2026-02-18 02:55:14.018193 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:55:14.018203 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:55:14.018214 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:55:14.018224 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:55:14.018235 | orchestrator | 2026-02-18 02:55:14.018246 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-02-18 02:55:14.018257 | orchestrator | Wednesday 18 February 2026 02:55:07 +0000 (0:00:01.851) 0:05:06.552 **** 2026-02-18 02:55:14.018267 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:55:14.018278 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:55:14.018288 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:55:14.018299 | orchestrator | changed: [testbed-manager] 2026-02-18 02:55:14.018310 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:55:14.018320 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:55:14.018331 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:55:14.018342 | orchestrator | 2026-02-18 02:55:14.018361 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-02-18 02:55:25.915028 | orchestrator | Wednesday 18 February 2026 02:55:13 +0000 (0:00:06.028) 0:05:12.580 **** 2026-02-18 02:55:25.915138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 02:55:25.915153 | orchestrator | 2026-02-18 02:55:25.915164 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-02-18 02:55:25.915173 | orchestrator | Wednesday 18 February 2026 02:55:14 +0000 (0:00:00.482) 0:05:13.063 **** 2026-02-18 02:55:25.915183 | orchestrator | changed: [testbed-manager] 2026-02-18 02:55:25.915193 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:55:25.915202 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:55:25.915211 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:55:25.915219 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:55:25.915228 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:55:25.915237 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:55:25.915246 | orchestrator | 2026-02-18 02:55:25.915255 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-02-18 02:55:25.915264 | orchestrator | Wednesday 18 February 2026 02:55:15 +0000 (0:00:00.716) 0:05:13.779 **** 2026-02-18 02:55:25.915273 | orchestrator | ok: [testbed-manager] 2026-02-18 02:55:25.915283 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:55:25.915292 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:55:25.915301 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:55:25.915310 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:55:25.915319 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:55:25.915327 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:55:25.915336 | orchestrator | 2026-02-18 02:55:25.915345 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-02-18 02:55:25.915354 | orchestrator | Wednesday 18 February 2026 02:55:17 +0000 (0:00:01.849) 0:05:15.629 **** 2026-02-18 02:55:25.915363 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:55:25.915372 | 
orchestrator | changed: [testbed-node-5] 2026-02-18 02:55:25.915380 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:55:25.915389 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:55:25.915398 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:55:25.915407 | orchestrator | changed: [testbed-manager] 2026-02-18 02:55:25.915416 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:55:25.915425 | orchestrator | 2026-02-18 02:55:25.915434 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-02-18 02:55:25.915443 | orchestrator | Wednesday 18 February 2026 02:55:17 +0000 (0:00:00.791) 0:05:16.420 **** 2026-02-18 02:55:25.915471 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:55:25.915480 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:55:25.915488 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:55:25.915497 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:55:25.915513 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:55:25.915536 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:55:25.915551 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:55:25.915567 | orchestrator | 2026-02-18 02:55:25.915585 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-02-18 02:55:25.915602 | orchestrator | Wednesday 18 February 2026 02:55:18 +0000 (0:00:00.336) 0:05:16.757 **** 2026-02-18 02:55:25.915618 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:55:25.915629 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:55:25.915639 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:55:25.915662 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:55:25.915673 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:55:25.915683 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:55:25.915693 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:55:25.915703 | 
orchestrator | 2026-02-18 02:55:25.915713 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-02-18 02:55:25.915723 | orchestrator | Wednesday 18 February 2026 02:55:18 +0000 (0:00:00.423) 0:05:17.181 **** 2026-02-18 02:55:25.915733 | orchestrator | ok: [testbed-manager] 2026-02-18 02:55:25.915742 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:55:25.915756 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:55:25.915771 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:55:25.915785 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:55:25.915800 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:55:25.915814 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:55:25.915830 | orchestrator | 2026-02-18 02:55:25.915844 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-02-18 02:55:25.915859 | orchestrator | Wednesday 18 February 2026 02:55:18 +0000 (0:00:00.310) 0:05:17.491 **** 2026-02-18 02:55:25.915868 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:55:25.915877 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:55:25.915886 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:55:25.915894 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:55:25.915903 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:55:25.915911 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:55:25.915920 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:55:25.915952 | orchestrator | 2026-02-18 02:55:25.915962 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-02-18 02:55:25.915972 | orchestrator | Wednesday 18 February 2026 02:55:19 +0000 (0:00:00.287) 0:05:17.778 **** 2026-02-18 02:55:25.915981 | orchestrator | ok: [testbed-manager] 2026-02-18 02:55:25.915990 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:55:25.915998 | orchestrator | ok: [testbed-node-4] 2026-02-18 
02:55:25.916007 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:55:25.916015 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:55:25.916024 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:55:25.916032 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:55:25.916041 | orchestrator | 2026-02-18 02:55:25.916050 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-02-18 02:55:25.916059 | orchestrator | Wednesday 18 February 2026 02:55:19 +0000 (0:00:00.356) 0:05:18.135 **** 2026-02-18 02:55:25.916067 | orchestrator | ok: [testbed-manager] =>  2026-02-18 02:55:25.916076 | orchestrator |  docker_version: 5:27.5.1 2026-02-18 02:55:25.916084 | orchestrator | ok: [testbed-node-3] =>  2026-02-18 02:55:25.916093 | orchestrator |  docker_version: 5:27.5.1 2026-02-18 02:55:25.916101 | orchestrator | ok: [testbed-node-4] =>  2026-02-18 02:55:25.916110 | orchestrator |  docker_version: 5:27.5.1 2026-02-18 02:55:25.916118 | orchestrator | ok: [testbed-node-5] =>  2026-02-18 02:55:25.916126 | orchestrator |  docker_version: 5:27.5.1 2026-02-18 02:55:25.916160 | orchestrator | ok: [testbed-node-0] =>  2026-02-18 02:55:25.916170 | orchestrator |  docker_version: 5:27.5.1 2026-02-18 02:55:25.916184 | orchestrator | ok: [testbed-node-1] =>  2026-02-18 02:55:25.916198 | orchestrator |  docker_version: 5:27.5.1 2026-02-18 02:55:25.916212 | orchestrator | ok: [testbed-node-2] =>  2026-02-18 02:55:25.916226 | orchestrator |  docker_version: 5:27.5.1 2026-02-18 02:55:25.916241 | orchestrator | 2026-02-18 02:55:25.916257 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-02-18 02:55:25.916272 | orchestrator | Wednesday 18 February 2026 02:55:19 +0000 (0:00:00.303) 0:05:18.438 **** 2026-02-18 02:55:25.916287 | orchestrator | ok: [testbed-manager] =>  2026-02-18 02:55:25.916298 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-18 02:55:25.916307 | orchestrator | ok: 
[testbed-node-3] =>  2026-02-18 02:55:25.916315 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-18 02:55:25.916324 | orchestrator | ok: [testbed-node-4] =>  2026-02-18 02:55:25.916332 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-18 02:55:25.916341 | orchestrator | ok: [testbed-node-5] =>  2026-02-18 02:55:25.916349 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-18 02:55:25.916358 | orchestrator | ok: [testbed-node-0] =>  2026-02-18 02:55:25.916366 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-18 02:55:25.916375 | orchestrator | ok: [testbed-node-1] =>  2026-02-18 02:55:25.916383 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-18 02:55:25.916392 | orchestrator | ok: [testbed-node-2] =>  2026-02-18 02:55:25.916400 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-18 02:55:25.916409 | orchestrator | 2026-02-18 02:55:25.916418 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-02-18 02:55:25.916427 | orchestrator | Wednesday 18 February 2026 02:55:20 +0000 (0:00:00.337) 0:05:18.776 **** 2026-02-18 02:55:25.916435 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:55:25.916444 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:55:25.916452 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:55:25.916461 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:55:25.916469 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:55:25.916478 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:55:25.916487 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:55:25.916495 | orchestrator | 2026-02-18 02:55:25.916504 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-02-18 02:55:25.916513 | orchestrator | Wednesday 18 February 2026 02:55:20 +0000 (0:00:00.304) 0:05:19.080 **** 2026-02-18 02:55:25.916521 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:55:25.916530 | orchestrator | 
skipping: [testbed-node-3] 2026-02-18 02:55:25.916538 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:55:25.916547 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:55:25.916555 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:55:25.916564 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:55:25.916573 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:55:25.916581 | orchestrator | 2026-02-18 02:55:25.916590 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-02-18 02:55:25.916599 | orchestrator | Wednesday 18 February 2026 02:55:20 +0000 (0:00:00.308) 0:05:19.388 **** 2026-02-18 02:55:25.916609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 02:55:25.916620 | orchestrator | 2026-02-18 02:55:25.916635 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-02-18 02:55:25.916644 | orchestrator | Wednesday 18 February 2026 02:55:21 +0000 (0:00:00.473) 0:05:19.862 **** 2026-02-18 02:55:25.916653 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:55:25.916662 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:55:25.916670 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:55:25.916679 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:55:25.916689 | orchestrator | ok: [testbed-manager] 2026-02-18 02:55:25.916713 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:55:25.916723 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:55:25.916732 | orchestrator | 2026-02-18 02:55:25.916740 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-02-18 02:55:25.916749 | orchestrator | Wednesday 18 February 2026 02:55:22 +0000 (0:00:01.178) 0:05:21.040 **** 2026-02-18 
02:55:25.916758 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:55:25.916766 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:55:25.916774 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:55:25.916783 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:55:25.916791 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:55:25.916799 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:55:25.916808 | orchestrator | ok: [testbed-manager] 2026-02-18 02:55:25.916817 | orchestrator | 2026-02-18 02:55:25.916825 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-02-18 02:55:25.916835 | orchestrator | Wednesday 18 February 2026 02:55:25 +0000 (0:00:02.997) 0:05:24.038 **** 2026-02-18 02:55:25.916844 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-02-18 02:55:25.916853 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-02-18 02:55:25.916862 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-02-18 02:55:25.916870 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-02-18 02:55:25.916879 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-02-18 02:55:25.916888 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-02-18 02:55:25.916896 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:55:25.916905 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-02-18 02:55:25.916913 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-02-18 02:55:25.916922 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-02-18 02:55:25.916955 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:55:25.916971 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-02-18 02:55:25.916986 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-02-18 02:55:25.917001 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2026-02-18 02:55:25.917016 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:55:25.917032 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-02-18 02:55:25.917050 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-02-18 02:56:25.632641 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:56:25.632747 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-02-18 02:56:25.632761 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-02-18 02:56:25.632771 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-02-18 02:56:25.632780 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-02-18 02:56:25.632789 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:56:25.632815 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:56:25.632825 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-02-18 02:56:25.632844 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-02-18 02:56:25.632854 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-02-18 02:56:25.632863 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:56:25.632872 | orchestrator | 2026-02-18 02:56:25.632882 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-02-18 02:56:25.632893 | orchestrator | Wednesday 18 February 2026 02:55:26 +0000 (0:00:00.702) 0:05:24.740 **** 2026-02-18 02:56:25.632902 | orchestrator | ok: [testbed-manager] 2026-02-18 02:56:25.632911 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:56:25.632920 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:56:25.632929 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:56:25.632938 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:56:25.632947 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:56:25.632980 | orchestrator | changed: [testbed-node-2] 
2026-02-18 02:56:25.632989 | orchestrator | 2026-02-18 02:56:25.632998 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-02-18 02:56:25.633007 | orchestrator | Wednesday 18 February 2026 02:55:33 +0000 (0:00:06.872) 0:05:31.613 **** 2026-02-18 02:56:25.633016 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:56:25.633025 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:56:25.633145 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:56:25.633157 | orchestrator | ok: [testbed-manager] 2026-02-18 02:56:25.633166 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:56:25.633175 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:56:25.633184 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:56:25.633193 | orchestrator | 2026-02-18 02:56:25.633201 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-02-18 02:56:25.633210 | orchestrator | Wednesday 18 February 2026 02:55:34 +0000 (0:00:01.063) 0:05:32.676 **** 2026-02-18 02:56:25.633219 | orchestrator | ok: [testbed-manager] 2026-02-18 02:56:25.633227 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:56:25.633237 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:56:25.633246 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:56:25.633254 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:56:25.633263 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:56:25.633271 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:56:25.633280 | orchestrator | 2026-02-18 02:56:25.633289 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-02-18 02:56:25.633298 | orchestrator | Wednesday 18 February 2026 02:55:42 +0000 (0:00:08.247) 0:05:40.923 **** 2026-02-18 02:56:25.633307 | orchestrator | changed: [testbed-manager] 2026-02-18 02:56:25.633315 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:56:25.633324 | 
orchestrator | changed: [testbed-node-5] 2026-02-18 02:56:25.633333 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:56:25.633341 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:56:25.633350 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:56:25.633359 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:56:25.633367 | orchestrator | 2026-02-18 02:56:25.633376 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-02-18 02:56:25.633385 | orchestrator | Wednesday 18 February 2026 02:55:45 +0000 (0:00:03.306) 0:05:44.230 **** 2026-02-18 02:56:25.633394 | orchestrator | ok: [testbed-manager] 2026-02-18 02:56:25.633403 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:56:25.633411 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:56:25.633420 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:56:25.633429 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:56:25.633438 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:56:25.633446 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:56:25.633455 | orchestrator | 2026-02-18 02:56:25.633464 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-02-18 02:56:25.633472 | orchestrator | Wednesday 18 February 2026 02:55:46 +0000 (0:00:01.289) 0:05:45.520 **** 2026-02-18 02:56:25.633481 | orchestrator | ok: [testbed-manager] 2026-02-18 02:56:25.633490 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:56:25.633498 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:56:25.633507 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:56:25.633516 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:56:25.633524 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:56:25.633533 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:56:25.633542 | orchestrator | 2026-02-18 02:56:25.633551 | orchestrator | TASK [osism.services.docker : Unlock 
containerd package] *********************** 2026-02-18 02:56:25.633560 | orchestrator | Wednesday 18 February 2026 02:55:48 +0000 (0:00:01.609) 0:05:47.130 **** 2026-02-18 02:56:25.633568 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:56:25.633577 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:56:25.633585 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:56:25.633594 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:56:25.633611 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:56:25.633620 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:56:25.633629 | orchestrator | changed: [testbed-manager] 2026-02-18 02:56:25.633637 | orchestrator | 2026-02-18 02:56:25.633646 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-02-18 02:56:25.633655 | orchestrator | Wednesday 18 February 2026 02:55:49 +0000 (0:00:00.663) 0:05:47.793 **** 2026-02-18 02:56:25.633663 | orchestrator | ok: [testbed-manager] 2026-02-18 02:56:25.633672 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:56:25.633681 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:56:25.633689 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:56:25.633698 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:56:25.633706 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:56:25.633715 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:56:25.633723 | orchestrator | 2026-02-18 02:56:25.633735 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-02-18 02:56:25.633768 | orchestrator | Wednesday 18 February 2026 02:55:58 +0000 (0:00:09.315) 0:05:57.109 **** 2026-02-18 02:56:25.633783 | orchestrator | changed: [testbed-manager] 2026-02-18 02:56:25.633795 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:56:25.633808 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:56:25.633820 | orchestrator | changed: [testbed-node-5] 
2026-02-18 02:56:25.633833 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:56:25.633845 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:56:25.633858 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:56:25.633872 | orchestrator |
2026-02-18 02:56:25.633885 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-18 02:56:25.633899 | orchestrator | Wednesday 18 February 2026 02:55:59 +0000 (0:00:00.923) 0:05:58.032 ****
2026-02-18 02:56:25.633913 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:25.633929 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:56:25.633944 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:56:25.633959 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:56:25.633975 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:56:25.633985 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:56:25.633993 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:56:25.634001 | orchestrator |
2026-02-18 02:56:25.634010 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-18 02:56:25.634099 | orchestrator | Wednesday 18 February 2026 02:56:08 +0000 (0:00:08.761) 0:06:06.794 ****
2026-02-18 02:56:25.634108 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:25.634117 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:56:25.634125 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:56:25.634134 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:56:25.634143 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:56:25.634151 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:56:25.634160 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:56:25.634168 | orchestrator |
2026-02-18 02:56:25.634177 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-18 02:56:25.634186 | orchestrator | Wednesday 18 February 2026 02:56:18 +0000 (0:00:10.776) 0:06:17.571 ****
2026-02-18 02:56:25.634194 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-18 02:56:25.634235 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-18 02:56:25.634244 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-18 02:56:25.634253 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-18 02:56:25.634262 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-18 02:56:25.634271 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-18 02:56:25.634279 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-18 02:56:25.634288 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-18 02:56:25.634297 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-18 02:56:25.634315 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-18 02:56:25.634323 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-18 02:56:25.634375 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-18 02:56:25.634384 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-18 02:56:25.634393 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-18 02:56:25.634401 | orchestrator |
2026-02-18 02:56:25.634410 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-18 02:56:25.634419 | orchestrator | Wednesday 18 February 2026 02:56:20 +0000 (0:00:01.285) 0:06:18.856 ****
2026-02-18 02:56:25.634431 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:56:25.634440 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:56:25.634449 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:56:25.634457 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:56:25.634466 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:56:25.634474 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:56:25.634483 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:56:25.634491 | orchestrator |
2026-02-18 02:56:25.634500 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-18 02:56:25.634508 | orchestrator | Wednesday 18 February 2026 02:56:20 +0000 (0:00:00.540) 0:06:19.396 ****
2026-02-18 02:56:25.634517 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:25.634526 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:56:25.634534 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:56:25.634543 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:56:25.634551 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:56:25.634560 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:56:25.634568 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:56:25.634576 | orchestrator |
2026-02-18 02:56:25.634585 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-18 02:56:25.634595 | orchestrator | Wednesday 18 February 2026 02:56:24 +0000 (0:00:03.711) 0:06:23.108 ****
2026-02-18 02:56:25.634603 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:56:25.634612 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:56:25.634621 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:56:25.634629 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:56:25.634637 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:56:25.634646 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:56:25.634654 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:56:25.634663 | orchestrator |
2026-02-18 02:56:25.634673 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-18 02:56:25.634682 | orchestrator | Wednesday 18 February 2026 02:56:25 +0000 (0:00:00.560) 0:06:23.669 ****
2026-02-18 02:56:25.634690 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-18 02:56:25.634699 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-18 02:56:25.634708 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:56:25.634717 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-18 02:56:25.634725 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-18 02:56:25.634733 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:56:25.634742 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-18 02:56:25.634751 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-18 02:56:25.634759 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:56:25.634778 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-18 02:56:45.480523 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-18 02:56:45.480615 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:56:45.480622 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-18 02:56:45.480627 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-18 02:56:45.480631 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:56:45.480651 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-18 02:56:45.480655 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-18 02:56:45.480659 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:56:45.480662 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-18 02:56:45.480666 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-18 02:56:45.480670 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:56:45.480674 | orchestrator |
2026-02-18 02:56:45.480679 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-02-18 02:56:45.480684 | orchestrator | Wednesday 18 February 2026 02:56:25 +0000 (0:00:00.816) 0:06:24.486 ****
2026-02-18 02:56:45.480688 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:56:45.480692 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:56:45.480696 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:56:45.480699 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:56:45.480703 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:56:45.480707 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:56:45.480710 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:56:45.480714 | orchestrator |
2026-02-18 02:56:45.480718 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-18 02:56:45.480722 | orchestrator | Wednesday 18 February 2026 02:56:26 +0000 (0:00:00.556) 0:06:25.042 ****
2026-02-18 02:56:45.480726 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:56:45.480729 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:56:45.480733 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:56:45.480737 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:56:45.480740 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:56:45.480744 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:56:45.480748 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:56:45.480751 | orchestrator |
2026-02-18 02:56:45.480755 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-18 02:56:45.480759 | orchestrator | Wednesday 18 February 2026 02:56:27 +0000 (0:00:00.553) 0:06:25.596 ****
2026-02-18 02:56:45.480762 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:56:45.480766 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:56:45.480770 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:56:45.480773 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:56:45.480777 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:56:45.480781 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:56:45.480784 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:56:45.480788 | orchestrator |
2026-02-18 02:56:45.480792 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-02-18 02:56:45.480796 | orchestrator | Wednesday 18 February 2026 02:56:27 +0000 (0:00:00.587) 0:06:26.183 ****
2026-02-18 02:56:45.480800 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:45.480803 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:56:45.480807 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:56:45.480811 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:56:45.480815 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:56:45.480818 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:56:45.480822 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:56:45.480826 | orchestrator |
2026-02-18 02:56:45.480829 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-02-18 02:56:45.480833 | orchestrator | Wednesday 18 February 2026 02:56:29 +0000 (0:00:01.928) 0:06:28.111 ****
2026-02-18 02:56:45.480838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:56:45.480843 | orchestrator |
2026-02-18 02:56:45.480847 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-02-18 02:56:45.480851 | orchestrator | Wednesday 18 February 2026 02:56:30 +0000 (0:00:00.952) 0:06:29.064 ****
2026-02-18 02:56:45.480865 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:45.480869 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:56:45.480873 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:56:45.480877 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:56:45.480881 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:56:45.480884 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:56:45.480888 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:56:45.480892 | orchestrator |
2026-02-18 02:56:45.480895 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-02-18 02:56:45.480899 | orchestrator | Wednesday 18 February 2026 02:56:31 +0000 (0:00:00.818) 0:06:29.882 ****
2026-02-18 02:56:45.480903 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:45.480906 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:56:45.480910 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:56:45.480914 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:56:45.480917 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:56:45.480921 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:56:45.480925 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:56:45.480928 | orchestrator |
2026-02-18 02:56:45.480932 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-02-18 02:56:45.480936 | orchestrator | Wednesday 18 February 2026 02:56:32 +0000 (0:00:00.856) 0:06:30.739 ****
2026-02-18 02:56:45.480939 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:45.480943 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:56:45.480947 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:56:45.480950 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:56:45.480954 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:56:45.480958 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:56:45.480961 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:56:45.480965 | orchestrator |
2026-02-18 02:56:45.480969 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-02-18 02:56:45.480982 | orchestrator | Wednesday 18 February 2026 02:56:33 +0000 (0:00:01.604) 0:06:32.343 ****
2026-02-18 02:56:45.480986 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:56:45.480990 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:56:45.480993 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:56:45.480997 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:56:45.481001 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:56:45.481005 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:56:45.481008 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:56:45.481012 | orchestrator |
2026-02-18 02:56:45.481016 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-02-18 02:56:45.481019 | orchestrator | Wednesday 18 February 2026 02:56:35 +0000 (0:00:01.405) 0:06:33.749 ****
2026-02-18 02:56:45.481023 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:45.481027 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:56:45.481030 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:56:45.481034 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:56:45.481038 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:56:45.481041 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:56:45.481045 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:56:45.481049 | orchestrator |
2026-02-18 02:56:45.481052 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-02-18 02:56:45.481056 | orchestrator | Wednesday 18 February 2026 02:56:36 +0000 (0:00:01.320) 0:06:35.070 ****
2026-02-18 02:56:45.481060 | orchestrator | changed: [testbed-manager]
2026-02-18 02:56:45.481063 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:56:45.481091 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:56:45.481095 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:56:45.481100 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:56:45.481104 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:56:45.481108 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:56:45.481112 | orchestrator |
2026-02-18 02:56:45.481120 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-02-18 02:56:45.481125 | orchestrator | Wednesday 18 February 2026 02:56:37 +0000 (0:00:01.509) 0:06:36.580 ****
2026-02-18 02:56:45.481129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:56:45.481134 | orchestrator |
2026-02-18 02:56:45.481138 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-02-18 02:56:45.481143 | orchestrator | Wednesday 18 February 2026 02:56:39 +0000 (0:00:01.161) 0:06:37.741 ****
2026-02-18 02:56:45.481147 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:45.481151 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:56:45.481156 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:56:45.481160 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:56:45.481164 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:56:45.481168 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:56:45.481172 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:56:45.481177 | orchestrator |
2026-02-18 02:56:45.481182 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-02-18 02:56:45.481186 | orchestrator | Wednesday 18 February 2026 02:56:40 +0000 (0:00:01.342) 0:06:39.084 ****
2026-02-18 02:56:45.481190 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:56:45.481194 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:56:45.481199 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:45.481203 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:56:45.481207 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:56:45.481223 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:56:45.481228 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:56:45.481231 | orchestrator |
2026-02-18 02:56:45.481235 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-02-18 02:56:45.481239 | orchestrator | Wednesday 18 February 2026 02:56:41 +0000 (0:00:01.116) 0:06:40.201 ****
2026-02-18 02:56:45.481243 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:45.481246 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:56:45.481250 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:56:45.481254 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:56:45.481257 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:56:45.481261 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:56:45.481265 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:56:45.481268 | orchestrator |
2026-02-18 02:56:45.481272 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-02-18 02:56:45.481276 | orchestrator | Wednesday 18 February 2026 02:56:42 +0000 (0:00:01.173) 0:06:41.375 ****
2026-02-18 02:56:45.481279 | orchestrator | ok: [testbed-manager]
2026-02-18 02:56:45.481283 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:56:45.481287 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:56:45.481290 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:56:45.481294 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:56:45.481298 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:56:45.481301 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:56:45.481305 | orchestrator |
2026-02-18 02:56:45.481309 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-02-18 02:56:45.481312 | orchestrator | Wednesday 18 February 2026 02:56:44 +0000 (0:00:01.377) 0:06:42.753 ****
2026-02-18 02:56:45.481316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:56:45.481320 | orchestrator |
2026-02-18 02:56:45.481323 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-18 02:56:45.481327 | orchestrator | Wednesday 18 February 2026 02:56:45 +0000 (0:00:00.962) 0:06:43.716 ****
2026-02-18 02:56:45.481331 | orchestrator |
2026-02-18 02:56:45.481335 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-18 02:56:45.481342 | orchestrator | Wednesday 18 February 2026 02:56:45 +0000 (0:00:00.045) 0:06:43.761 ****
2026-02-18 02:56:45.481345 | orchestrator |
2026-02-18 02:56:45.481349 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-18 02:56:45.481353 | orchestrator | Wednesday 18 February 2026 02:56:45 +0000 (0:00:00.049) 0:06:43.810 ****
2026-02-18 02:56:45.481356 | orchestrator |
2026-02-18 02:56:45.481360 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-18 02:56:45.481367 | orchestrator | Wednesday 18 February 2026 02:56:45 +0000 (0:00:00.041) 0:06:43.851 ****
2026-02-18 02:57:12.680395 | orchestrator |
2026-02-18 02:57:12.680503 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-18 02:57:12.680518 | orchestrator | Wednesday 18 February 2026 02:56:45 +0000 (0:00:00.041) 0:06:43.893 ****
2026-02-18 02:57:12.680528 | orchestrator |
2026-02-18 02:57:12.680537 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-18 02:57:12.680546 | orchestrator | Wednesday 18 February 2026 02:56:45 +0000 (0:00:00.056) 0:06:43.950 ****
2026-02-18 02:57:12.680555 | orchestrator |
2026-02-18 02:57:12.680564 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-18 02:57:12.680573 | orchestrator | Wednesday 18 February 2026 02:56:45 +0000 (0:00:00.044) 0:06:43.994 ****
2026-02-18 02:57:12.680581 | orchestrator |
2026-02-18 02:57:12.680590 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-18 02:57:12.680599 | orchestrator | Wednesday 18 February 2026 02:56:45 +0000 (0:00:00.052) 0:06:44.046 ****
2026-02-18 02:57:12.680608 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:57:12.680618 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:57:12.680627 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:57:12.680636 | orchestrator |
2026-02-18 02:57:12.680644 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-02-18 02:57:12.680653 | orchestrator | Wednesday 18 February 2026 02:56:46 +0000 (0:00:01.072) 0:06:45.118 ****
2026-02-18 02:57:12.680662 | orchestrator | changed: [testbed-manager]
2026-02-18 02:57:12.680671 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:57:12.680680 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:57:12.680689 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:57:12.680697 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:57:12.680706 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:57:12.680714 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:57:12.680723 | orchestrator |
2026-02-18 02:57:12.680732 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-02-18 02:57:12.680740 | orchestrator | Wednesday 18 February 2026 02:56:48 +0000 (0:00:01.536) 0:06:46.655 ****
2026-02-18 02:57:12.680749 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:57:12.680758 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:57:12.680766 | orchestrator | changed: [testbed-manager]
2026-02-18 02:57:12.680775 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:57:12.680783 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:57:12.680792 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:57:12.680800 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:57:12.680809 | orchestrator |
2026-02-18 02:57:12.680818 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-02-18 02:57:12.680827 | orchestrator | Wednesday 18 February 2026 02:56:49 +0000 (0:00:01.244) 0:06:47.899 ****
2026-02-18 02:57:12.680835 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:57:12.680844 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:57:12.680852 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:57:12.680861 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:57:12.680870 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:57:12.680878 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:57:12.680887 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:57:12.680896 | orchestrator |
2026-02-18 02:57:12.680904 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-02-18 02:57:12.680913 | orchestrator | Wednesday 18 February 2026 02:56:51 +0000 (0:00:02.308) 0:06:50.208 ****
2026-02-18 02:57:12.680956 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:57:12.680967 | orchestrator |
2026-02-18 02:57:12.680978 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-02-18 02:57:12.680988 | orchestrator | Wednesday 18 February 2026 02:56:51 +0000 (0:00:00.105) 0:06:50.313 ****
2026-02-18 02:57:12.680998 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:12.681008 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:57:12.681018 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:57:12.681028 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:57:12.681038 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:57:12.681048 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:57:12.681059 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:57:12.681069 | orchestrator |
2026-02-18 02:57:12.681079 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-02-18 02:57:12.681090 | orchestrator | Wednesday 18 February 2026 02:56:52 +0000 (0:00:01.123) 0:06:51.436 ****
2026-02-18 02:57:12.681100 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:57:12.681110 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:57:12.681156 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:57:12.681166 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:57:12.681176 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:57:12.681185 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:57:12.681194 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:57:12.681202 | orchestrator |
2026-02-18 02:57:12.681211 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-02-18 02:57:12.681219 | orchestrator | Wednesday 18 February 2026 02:56:53 +0000 (0:00:00.617) 0:06:52.054 ****
2026-02-18 02:57:12.681229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:57:12.681240 | orchestrator |
2026-02-18 02:57:12.681249 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-02-18 02:57:12.681258 | orchestrator | Wednesday 18 February 2026 02:56:54 +0000 (0:00:01.208) 0:06:53.262 ****
2026-02-18 02:57:12.681266 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:12.681275 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:57:12.681283 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:57:12.681292 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:57:12.681300 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:57:12.681309 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:57:12.681318 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:57:12.681326 | orchestrator |
2026-02-18 02:57:12.681335 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-02-18 02:57:12.681343 | orchestrator | Wednesday 18 February 2026 02:56:55 +0000 (0:00:00.952) 0:06:54.215 ****
2026-02-18 02:57:12.681352 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-02-18 02:57:12.681376 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-02-18 02:57:12.681386 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-02-18 02:57:12.681394 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-02-18 02:57:12.681403 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-02-18 02:57:12.681412 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-02-18 02:57:12.681420 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-02-18 02:57:12.681429 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-02-18 02:57:12.681438 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-02-18 02:57:12.681446 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-02-18 02:57:12.681455 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-02-18 02:57:12.681463 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-02-18 02:57:12.681480 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-02-18 02:57:12.681488 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-02-18 02:57:12.681497 | orchestrator |
2026-02-18 02:57:12.681506 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-02-18 02:57:12.681515 | orchestrator | Wednesday 18 February 2026 02:56:58 +0000 (0:00:02.674) 0:06:56.889 ****
2026-02-18 02:57:12.681523 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:57:12.681532 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:57:12.681541 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:57:12.681549 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:57:12.681557 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:57:12.681566 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:57:12.681574 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:57:12.681583 | orchestrator |
2026-02-18 02:57:12.681592 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-02-18 02:57:12.681600 | orchestrator | Wednesday 18 February 2026 02:56:59 +0000 (0:00:00.750) 0:06:57.640 ****
2026-02-18 02:57:12.681610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 02:57:12.681621 | orchestrator |
2026-02-18 02:57:12.681630 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-02-18 02:57:12.681638 | orchestrator | Wednesday 18 February 2026 02:56:59 +0000 (0:00:00.887) 0:06:58.527 ****
2026-02-18 02:57:12.681647 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:12.681655 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:57:12.681664 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:57:12.681672 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:57:12.681681 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:57:12.681690 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:57:12.681698 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:57:12.681707 | orchestrator |
2026-02-18 02:57:12.681715 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-02-18 02:57:12.681724 | orchestrator | Wednesday 18 February 2026 02:57:00 +0000 (0:00:00.924) 0:06:59.452 ****
2026-02-18 02:57:12.681737 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:12.681746 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:57:12.681755 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:57:12.681763 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:57:12.681771 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:57:12.681780 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:57:12.681788 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:57:12.681797 | orchestrator |
2026-02-18 02:57:12.681805 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-02-18 02:57:12.681814 | orchestrator | Wednesday 18 February 2026 02:57:01 +0000 (0:00:01.080) 0:07:00.532 ****
2026-02-18 02:57:12.681823 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:57:12.681831 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:57:12.681840 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:57:12.681848 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:57:12.681857 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:57:12.681865 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:57:12.681874 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:57:12.681882 | orchestrator |
2026-02-18 02:57:12.681891 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-02-18 02:57:12.681900 | orchestrator | Wednesday 18 February 2026 02:57:02 +0000 (0:00:00.572) 0:07:01.105 ****
2026-02-18 02:57:12.681908 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:12.681917 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:57:12.681925 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:57:12.681934 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:57:12.681943 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:57:12.681957 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:57:12.681966 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:57:12.681974 | orchestrator |
2026-02-18 02:57:12.681983 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-18 02:57:12.681992 | orchestrator | Wednesday 18 February 2026 02:57:04 +0000 (0:00:01.504) 0:07:02.610 ****
2026-02-18 02:57:12.682000 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:57:12.682009 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:57:12.682076 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:57:12.682085 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:57:12.682093 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:57:12.682102 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:57:12.682131 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:57:12.682142 | orchestrator |
2026-02-18 02:57:12.682151 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-18 02:57:12.682160 | orchestrator | Wednesday 18 February 2026 02:57:04 +0000 (0:00:00.645) 0:07:03.256 ****
2026-02-18 02:57:12.682169 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:12.682177 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:57:12.682186 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:57:12.682195 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:57:12.682203 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:57:12.682212 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:57:12.682227 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:57:46.276858 | orchestrator |
2026-02-18 02:57:46.276945 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-18 02:57:46.276954 | orchestrator | Wednesday 18 February 2026 02:57:12 +0000 (0:00:07.990) 0:07:11.246 ****
2026-02-18 02:57:46.276960 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:46.276968 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:57:46.276974 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:57:46.276980 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:57:46.276985 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:57:46.276991 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:57:46.276997 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:57:46.277002 | orchestrator |
2026-02-18 02:57:46.277008 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-18 02:57:46.277013 | orchestrator | Wednesday 18 February 2026 02:57:14 +0000 (0:00:01.685) 0:07:12.932 ****
2026-02-18 02:57:46.277019 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:46.277024 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:57:46.277029 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:57:46.277035 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:57:46.277040 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:57:46.277045 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:57:46.277051 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:57:46.277056 | orchestrator |
2026-02-18 02:57:46.277062 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-18 02:57:46.277067 | orchestrator | Wednesday 18 February 2026 02:57:16 +0000 (0:00:01.889) 0:07:14.821 ****
2026-02-18 02:57:46.277072 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:46.277078 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:57:46.277083 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:57:46.277089 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:57:46.277094 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:57:46.277099 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:57:46.277105 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:57:46.277110 | orchestrator |
2026-02-18 02:57:46.277116 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-18 02:57:46.277121 | orchestrator | Wednesday 18 February 2026 02:57:18 +0000 (0:00:01.780) 0:07:16.602 ****
2026-02-18 02:57:46.277126 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:46.277132 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:57:46.277137 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:57:46.277160 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:57:46.277205 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:57:46.277214 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:57:46.277222 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:57:46.277231 | orchestrator |
2026-02-18 02:57:46.277240 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-18 02:57:46.277248 | orchestrator | Wednesday 18 February 2026 02:57:18 +0000 (0:00:00.878) 0:07:17.480 ****
2026-02-18 02:57:46.277257 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:57:46.277266 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:57:46.277274 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:57:46.277283 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:57:46.277291 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:57:46.277300 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:57:46.277309 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:57:46.277317 | orchestrator |
2026-02-18 02:57:46.277325 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-18 02:57:46.277334 | orchestrator | Wednesday 18 February 2026 02:57:19 +0000 (0:00:01.076) 0:07:18.556 ****
2026-02-18 02:57:46.277343 | orchestrator | skipping: [testbed-manager]
2026-02-18 02:57:46.277352 | orchestrator | skipping: [testbed-node-3]
2026-02-18 02:57:46.277361 | orchestrator | skipping: [testbed-node-4]
2026-02-18 02:57:46.277370 | orchestrator | skipping: [testbed-node-5]
2026-02-18 02:57:46.277379 | orchestrator | skipping: [testbed-node-0]
2026-02-18 02:57:46.277389 | orchestrator | skipping: [testbed-node-1]
2026-02-18 02:57:46.277395 | orchestrator | skipping: [testbed-node-2]
2026-02-18 02:57:46.277400 | orchestrator |
2026-02-18 02:57:46.277406 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-02-18 02:57:46.277413 | orchestrator | Wednesday 18 February 2026 02:57:20 +0000 (0:00:00.572) 0:07:19.140 ****
2026-02-18 02:57:46.277419 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:46.277441 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:57:46.277448 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:57:46.277454 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:57:46.277460 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:57:46.277467 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:57:46.277473 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:57:46.277479 | orchestrator |
2026-02-18 02:57:46.277485 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-02-18 02:57:46.277492 | orchestrator | Wednesday 18 February 2026 02:57:21 +0000 (0:00:00.572) 0:07:19.712 ****
2026-02-18 02:57:46.277498 | orchestrator | ok: [testbed-manager]
2026-02-18 02:57:46.277504 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:57:46.277510 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:57:46.277517 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:57:46.277523 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:57:46.277529 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:57:46.277535 | orchestrator | ok: [testbed-node-2]
2026-02-18
02:57:46.277541 | orchestrator | 2026-02-18 02:57:46.277547 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-18 02:57:46.277553 | orchestrator | Wednesday 18 February 2026 02:57:21 +0000 (0:00:00.650) 0:07:20.363 **** 2026-02-18 02:57:46.277559 | orchestrator | ok: [testbed-manager] 2026-02-18 02:57:46.277565 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:57:46.277571 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:57:46.277577 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:57:46.277584 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:57:46.277590 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:57:46.277596 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:57:46.277602 | orchestrator | 2026-02-18 02:57:46.277608 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-18 02:57:46.277614 | orchestrator | Wednesday 18 February 2026 02:57:22 +0000 (0:00:00.770) 0:07:21.133 **** 2026-02-18 02:57:46.277620 | orchestrator | ok: [testbed-manager] 2026-02-18 02:57:46.277626 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:57:46.277638 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:57:46.277645 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:57:46.277651 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:57:46.277657 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:57:46.277663 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:57:46.277670 | orchestrator | 2026-02-18 02:57:46.277695 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-02-18 02:57:46.277706 | orchestrator | Wednesday 18 February 2026 02:57:28 +0000 (0:00:05.678) 0:07:26.812 **** 2026-02-18 02:57:46.277715 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:57:46.277725 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:57:46.277735 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:57:46.277745 
| orchestrator | skipping: [testbed-node-5] 2026-02-18 02:57:46.277754 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:57:46.277763 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:57:46.277770 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:57:46.277775 | orchestrator | 2026-02-18 02:57:46.277780 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-02-18 02:57:46.277786 | orchestrator | Wednesday 18 February 2026 02:57:28 +0000 (0:00:00.568) 0:07:27.381 **** 2026-02-18 02:57:46.277793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 02:57:46.277801 | orchestrator | 2026-02-18 02:57:46.277807 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-02-18 02:57:46.277812 | orchestrator | Wednesday 18 February 2026 02:57:29 +0000 (0:00:01.087) 0:07:28.468 **** 2026-02-18 02:57:46.277818 | orchestrator | ok: [testbed-manager] 2026-02-18 02:57:46.277825 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:57:46.277834 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:57:46.277843 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:57:46.277851 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:57:46.277860 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:57:46.277868 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:57:46.277878 | orchestrator | 2026-02-18 02:57:46.277887 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-02-18 02:57:46.277896 | orchestrator | Wednesday 18 February 2026 02:57:31 +0000 (0:00:01.952) 0:07:30.420 **** 2026-02-18 02:57:46.277905 | orchestrator | ok: [testbed-manager] 2026-02-18 02:57:46.277914 | orchestrator | ok: [testbed-node-3] 2026-02-18 
02:57:46.277923 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:57:46.277932 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:57:46.277940 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:57:46.277949 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:57:46.277958 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:57:46.277967 | orchestrator | 2026-02-18 02:57:46.277976 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-02-18 02:57:46.277986 | orchestrator | Wednesday 18 February 2026 02:57:32 +0000 (0:00:01.167) 0:07:31.587 **** 2026-02-18 02:57:46.277994 | orchestrator | ok: [testbed-manager] 2026-02-18 02:57:46.278004 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:57:46.278062 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:57:46.278069 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:57:46.278075 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:57:46.278080 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:57:46.278085 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:57:46.278091 | orchestrator | 2026-02-18 02:57:46.278096 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-02-18 02:57:46.278102 | orchestrator | Wednesday 18 February 2026 02:57:33 +0000 (0:00:00.932) 0:07:32.520 **** 2026-02-18 02:57:46.278116 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-18 02:57:46.278128 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-18 02:57:46.278145 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-18 02:57:46.278156 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-18 02:57:46.278162 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-18 02:57:46.278223 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-18 02:57:46.278229 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-18 02:57:46.278235 | orchestrator | 2026-02-18 02:57:46.278240 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-02-18 02:57:46.278246 | orchestrator | Wednesday 18 February 2026 02:57:35 +0000 (0:00:02.038) 0:07:34.558 **** 2026-02-18 02:57:46.278251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 02:57:46.278257 | orchestrator | 2026-02-18 02:57:46.278263 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-02-18 02:57:46.278268 | orchestrator | Wednesday 18 February 2026 02:57:36 +0000 (0:00:00.885) 0:07:35.444 **** 2026-02-18 02:57:46.278273 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:57:46.278279 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:57:46.278284 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:57:46.278290 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:57:46.278295 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:57:46.278300 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:57:46.278306 | orchestrator | changed: 
[testbed-manager] 2026-02-18 02:57:46.278311 | orchestrator | 2026-02-18 02:57:46.278323 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-02-18 02:58:18.676046 | orchestrator | Wednesday 18 February 2026 02:57:46 +0000 (0:00:09.401) 0:07:44.845 **** 2026-02-18 02:58:18.676151 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:58:18.676162 | orchestrator | ok: [testbed-manager] 2026-02-18 02:58:18.676171 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:58:18.676177 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:58:18.676184 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:58:18.676190 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:58:18.676197 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:58:18.676203 | orchestrator | 2026-02-18 02:58:18.676269 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-02-18 02:58:18.676279 | orchestrator | Wednesday 18 February 2026 02:57:48 +0000 (0:00:02.192) 0:07:47.037 **** 2026-02-18 02:58:18.676289 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:58:18.676299 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:58:18.676309 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:58:18.676325 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:58:18.676335 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:58:18.676345 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:58:18.676355 | orchestrator | 2026-02-18 02:58:18.676364 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-02-18 02:58:18.676374 | orchestrator | Wednesday 18 February 2026 02:57:49 +0000 (0:00:01.379) 0:07:48.417 **** 2026-02-18 02:58:18.676385 | orchestrator | changed: [testbed-manager] 2026-02-18 02:58:18.676396 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:58:18.676406 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:58:18.676416 | orchestrator | changed: 
[testbed-node-4] 2026-02-18 02:58:18.676426 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:58:18.676461 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:58:18.676469 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:58:18.676475 | orchestrator | 2026-02-18 02:58:18.676482 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-02-18 02:58:18.676488 | orchestrator | 2026-02-18 02:58:18.676495 | orchestrator | TASK [Include hardening role] ************************************************** 2026-02-18 02:58:18.676501 | orchestrator | Wednesday 18 February 2026 02:57:51 +0000 (0:00:01.287) 0:07:49.704 **** 2026-02-18 02:58:18.676507 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:58:18.676514 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:58:18.676520 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:58:18.676526 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:58:18.676533 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:58:18.676539 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:58:18.676545 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:58:18.676552 | orchestrator | 2026-02-18 02:58:18.676558 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-02-18 02:58:18.676564 | orchestrator | 2026-02-18 02:58:18.676571 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-02-18 02:58:18.676577 | orchestrator | Wednesday 18 February 2026 02:57:51 +0000 (0:00:00.841) 0:07:50.546 **** 2026-02-18 02:58:18.676583 | orchestrator | changed: [testbed-manager] 2026-02-18 02:58:18.676589 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:58:18.676596 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:58:18.676603 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:58:18.676610 | orchestrator | changed: [testbed-node-0] 2026-02-18 
02:58:18.676618 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:58:18.676625 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:58:18.676632 | orchestrator | 2026-02-18 02:58:18.676640 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-02-18 02:58:18.676658 | orchestrator | Wednesday 18 February 2026 02:57:53 +0000 (0:00:01.409) 0:07:51.956 **** 2026-02-18 02:58:18.676666 | orchestrator | ok: [testbed-manager] 2026-02-18 02:58:18.676673 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:58:18.676680 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:58:18.676688 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:58:18.676695 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:58:18.676704 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:58:18.676715 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:58:18.676725 | orchestrator | 2026-02-18 02:58:18.676735 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-02-18 02:58:18.676746 | orchestrator | Wednesday 18 February 2026 02:57:54 +0000 (0:00:01.535) 0:07:53.491 **** 2026-02-18 02:58:18.676756 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:58:18.676767 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:58:18.676775 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:58:18.676782 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:58:18.676789 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:58:18.676796 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:58:18.676803 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:58:18.676810 | orchestrator | 2026-02-18 02:58:18.676818 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-02-18 02:58:18.676825 | orchestrator | Wednesday 18 February 2026 02:57:55 +0000 (0:00:00.572) 0:07:54.064 **** 2026-02-18 02:58:18.676833 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 02:58:18.676841 | orchestrator | 2026-02-18 02:58:18.676849 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-02-18 02:58:18.676856 | orchestrator | Wednesday 18 February 2026 02:57:56 +0000 (0:00:01.120) 0:07:55.184 **** 2026-02-18 02:58:18.676866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 02:58:18.676884 | orchestrator | 2026-02-18 02:58:18.676898 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-02-18 02:58:18.676912 | orchestrator | Wednesday 18 February 2026 02:57:57 +0000 (0:00:00.891) 0:07:56.076 **** 2026-02-18 02:58:18.676922 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:58:18.676932 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:58:18.676942 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:58:18.676952 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:58:18.676962 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:58:18.676970 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:58:18.676981 | orchestrator | changed: [testbed-manager] 2026-02-18 02:58:18.676990 | orchestrator | 2026-02-18 02:58:18.677018 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-02-18 02:58:18.677030 | orchestrator | Wednesday 18 February 2026 02:58:06 +0000 (0:00:08.856) 0:08:04.933 **** 2026-02-18 02:58:18.677041 | orchestrator | changed: [testbed-manager] 2026-02-18 02:58:18.677052 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:58:18.677062 | orchestrator | changed: [testbed-node-4] 2026-02-18 
02:58:18.677073 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:58:18.677084 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:58:18.677093 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:58:18.677099 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:58:18.677106 | orchestrator | 2026-02-18 02:58:18.677112 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-02-18 02:58:18.677118 | orchestrator | Wednesday 18 February 2026 02:58:07 +0000 (0:00:01.141) 0:08:06.074 **** 2026-02-18 02:58:18.677124 | orchestrator | changed: [testbed-manager] 2026-02-18 02:58:18.677130 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:58:18.677137 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:58:18.677143 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:58:18.677149 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:58:18.677155 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:58:18.677161 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:58:18.677168 | orchestrator | 2026-02-18 02:58:18.677174 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-02-18 02:58:18.677181 | orchestrator | Wednesday 18 February 2026 02:58:08 +0000 (0:00:01.402) 0:08:07.477 **** 2026-02-18 02:58:18.677187 | orchestrator | changed: [testbed-manager] 2026-02-18 02:58:18.677193 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:58:18.677199 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:58:18.677205 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:58:18.677211 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:58:18.677265 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:58:18.677273 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:58:18.677279 | orchestrator | 2026-02-18 02:58:18.677285 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-02-18 02:58:18.677291 | orchestrator | Wednesday 18 February 2026 02:58:10 +0000 (0:00:02.041) 0:08:09.518 **** 2026-02-18 02:58:18.677297 | orchestrator | changed: [testbed-manager] 2026-02-18 02:58:18.677303 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:58:18.677309 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:58:18.677315 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:58:18.677321 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:58:18.677327 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:58:18.677334 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:58:18.677340 | orchestrator | 2026-02-18 02:58:18.677346 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-02-18 02:58:18.677352 | orchestrator | Wednesday 18 February 2026 02:58:12 +0000 (0:00:01.249) 0:08:10.768 **** 2026-02-18 02:58:18.677358 | orchestrator | changed: [testbed-manager] 2026-02-18 02:58:18.677364 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:58:18.677377 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:58:18.677383 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:58:18.677390 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:58:18.677396 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:58:18.677402 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:58:18.677408 | orchestrator | 2026-02-18 02:58:18.677414 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-02-18 02:58:18.677420 | orchestrator | 2026-02-18 02:58:18.677433 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-02-18 02:58:18.677439 | orchestrator | Wednesday 18 February 2026 02:58:13 +0000 (0:00:01.152) 0:08:11.921 **** 2026-02-18 02:58:18.677446 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-18 02:58:18.677453 | orchestrator | 2026-02-18 02:58:18.677459 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-18 02:58:18.677465 | orchestrator | Wednesday 18 February 2026 02:58:14 +0000 (0:00:00.963) 0:08:12.884 **** 2026-02-18 02:58:18.677471 | orchestrator | ok: [testbed-manager] 2026-02-18 02:58:18.677477 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:58:18.677483 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:58:18.677489 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:58:18.677495 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:58:18.677501 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:58:18.677507 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:58:18.677513 | orchestrator | 2026-02-18 02:58:18.677519 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-18 02:58:18.677525 | orchestrator | Wednesday 18 February 2026 02:58:15 +0000 (0:00:01.132) 0:08:14.017 **** 2026-02-18 02:58:18.677532 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:58:18.677538 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:58:18.677544 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:58:18.677550 | orchestrator | changed: [testbed-manager] 2026-02-18 02:58:18.677556 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:58:18.677562 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:58:18.677568 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:58:18.677574 | orchestrator | 2026-02-18 02:58:18.677580 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-02-18 02:58:18.677586 | orchestrator | Wednesday 18 February 2026 02:58:16 +0000 (0:00:01.234) 0:08:15.252 **** 2026-02-18 02:58:18.677593 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-18 02:58:18.677599 | orchestrator | 2026-02-18 02:58:18.677605 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-18 02:58:18.677611 | orchestrator | Wednesday 18 February 2026 02:58:17 +0000 (0:00:00.902) 0:08:16.154 **** 2026-02-18 02:58:18.677617 | orchestrator | ok: [testbed-manager] 2026-02-18 02:58:18.677624 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:58:18.677630 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:58:18.677636 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:58:18.677642 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:58:18.677648 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:58:18.677654 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:58:18.677660 | orchestrator | 2026-02-18 02:58:18.677672 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-18 02:58:20.567615 | orchestrator | Wednesday 18 February 2026 02:58:18 +0000 (0:00:01.094) 0:08:17.249 **** 2026-02-18 02:58:20.567699 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:58:20.567711 | orchestrator | changed: [testbed-manager] 2026-02-18 02:58:20.567719 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:58:20.567727 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:58:20.567734 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:58:20.567742 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:58:20.567749 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:58:20.567779 | orchestrator | 2026-02-18 02:58:20.567788 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 02:58:20.567796 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-18 02:58:20.567805 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-02-18 02:58:20.567813 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-18 02:58:20.567820 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-18 02:58:20.567827 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-02-18 02:58:20.567835 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-18 02:58:20.567842 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-18 02:58:20.567849 | orchestrator | 2026-02-18 02:58:20.567856 | orchestrator | 2026-02-18 02:58:20.567864 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 02:58:20.567871 | orchestrator | Wednesday 18 February 2026 02:58:19 +0000 (0:00:01.260) 0:08:18.509 **** 2026-02-18 02:58:20.567879 | orchestrator | =============================================================================== 2026-02-18 02:58:20.567886 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.26s 2026-02-18 02:58:20.567893 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.77s 2026-02-18 02:58:20.567901 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.07s 2026-02-18 02:58:20.567908 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.20s 2026-02-18 02:58:20.567915 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.77s 2026-02-18 02:58:20.567935 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.43s 2026-02-18 02:58:20.567944 | orchestrator | osism.services.docker : Install docker package ------------------------- 
10.78s 2026-02-18 02:58:20.567951 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.40s 2026-02-18 02:58:20.567959 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.32s 2026-02-18 02:58:20.567966 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.00s 2026-02-18 02:58:20.567973 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.86s 2026-02-18 02:58:20.567981 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.76s 2026-02-18 02:58:20.567988 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.25s 2026-02-18 02:58:20.567995 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.21s 2026-02-18 02:58:20.568002 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.04s 2026-02-18 02:58:20.568009 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.99s 2026-02-18 02:58:20.568016 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.87s 2026-02-18 02:58:20.568023 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 6.49s 2026-02-18 02:58:20.568030 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.03s 2026-02-18 02:58:20.568038 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.68s 2026-02-18 02:58:20.926154 | orchestrator | + osism apply fail2ban 2026-02-18 02:58:34.076222 | orchestrator | 2026-02-18 02:58:34 | INFO  | Task 72621002-e515-4cbd-b5c7-8c9dd4b7ac43 (fail2ban) was prepared for execution. 
2026-02-18 02:58:34.076362 | orchestrator | 2026-02-18 02:58:34 | INFO  | It takes a moment until task 72621002-e515-4cbd-b5c7-8c9dd4b7ac43 (fail2ban) has been started and output is visible here.
2026-02-18 02:58:57.137214 | orchestrator |
2026-02-18 02:58:57.137356 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-18 02:58:57.137372 | orchestrator |
2026-02-18 02:58:57.137381 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-18 02:58:57.137391 | orchestrator | Wednesday 18 February 2026 02:58:39 +0000 (0:00:00.303) 0:00:00.303 ****
2026-02-18 02:58:57.137400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 02:58:57.137410 | orchestrator |
2026-02-18 02:58:57.137419 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-18 02:58:57.137428 | orchestrator | Wednesday 18 February 2026 02:58:40 +0000 (0:00:01.234) 0:00:01.538 ****
2026-02-18 02:58:57.137436 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:58:57.137446 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:58:57.137454 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:58:57.137462 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:58:57.137470 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:58:57.137478 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:58:57.137486 | orchestrator | changed: [testbed-manager]
2026-02-18 02:58:57.137495 | orchestrator |
2026-02-18 02:58:57.137504 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-18 02:58:57.137512 | orchestrator | Wednesday 18 February 2026 02:58:51 +0000 (0:00:11.463) 0:00:13.001 ****
2026-02-18 02:58:57.137520 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:58:57.137528 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:58:57.137536 | orchestrator | changed: [testbed-manager]
2026-02-18 02:58:57.137545 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:58:57.137553 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:58:57.137561 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:58:57.137569 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:58:57.137577 | orchestrator |
2026-02-18 02:58:57.137585 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-18 02:58:57.137593 | orchestrator | Wednesday 18 February 2026 02:58:53 +0000 (0:00:01.498) 0:00:14.500 ****
2026-02-18 02:58:57.137602 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:58:57.137611 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:58:57.137625 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:58:57.137638 | orchestrator | ok: [testbed-manager]
2026-02-18 02:58:57.137657 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:58:57.137675 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:58:57.137689 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:58:57.137702 | orchestrator |
2026-02-18 02:58:57.137715 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-18 02:58:57.137729 | orchestrator | Wednesday 18 February 2026 02:58:54 +0000 (0:00:01.582) 0:00:16.083 ****
2026-02-18 02:58:57.137741 | orchestrator | changed: [testbed-node-0]
2026-02-18 02:58:57.137755 | orchestrator | changed: [testbed-manager]
2026-02-18 02:58:57.137769 | orchestrator | changed: [testbed-node-1]
2026-02-18 02:58:57.137783 | orchestrator | changed: [testbed-node-2]
2026-02-18 02:58:57.137797 | orchestrator | changed: [testbed-node-3]
2026-02-18 02:58:57.137811 | orchestrator | changed: [testbed-node-4]
2026-02-18 02:58:57.137826 | orchestrator | changed: [testbed-node-5]
2026-02-18 02:58:57.137841 | orchestrator |
2026-02-18 02:58:57.137855 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 02:58:57.137870 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:58:57.137907 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:58:57.137916 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:58:57.137925 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:58:57.137933 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:58:57.137941 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:58:57.137949 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 02:58:57.137956 | orchestrator |
2026-02-18 02:58:57.137964 | orchestrator |
2026-02-18 02:58:57.137972 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 02:58:57.137980 | orchestrator | Wednesday 18 February 2026 02:58:56 +0000 (0:00:01.756) 0:00:17.839 ****
2026-02-18 02:58:57.137988 | orchestrator | ===============================================================================
2026-02-18 02:58:57.137996 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.46s
2026-02-18 02:58:57.138004 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.76s
2026-02-18 02:58:57.138012 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.58s
2026-02-18 02:58:57.138079 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.50s
2026-02-18 02:58:57.138088 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.23s
2026-02-18 02:58:57.507754 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-18 02:58:57.507841 | orchestrator | + osism apply network
2026-02-18 02:59:09.626787 | orchestrator | 2026-02-18 02:59:09 | INFO  | Task fd670b8d-160f-4d8a-b328-5ed5afb9594a (network) was prepared for execution.
2026-02-18 02:59:09.626921 | orchestrator | 2026-02-18 02:59:09 | INFO  | It takes a moment until task fd670b8d-160f-4d8a-b328-5ed5afb9594a (network) has been started and output is visible here.
2026-02-18 02:59:40.539677 | orchestrator |
2026-02-18 02:59:40.539788 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-18 02:59:40.539805 | orchestrator |
2026-02-18 02:59:40.539817 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-18 02:59:40.539829 | orchestrator | Wednesday 18 February 2026 02:59:14 +0000 (0:00:00.302) 0:00:00.302 ****
2026-02-18 02:59:40.539840 | orchestrator | ok: [testbed-manager]
2026-02-18 02:59:40.539853 | orchestrator | ok: [testbed-node-0]
2026-02-18 02:59:40.539864 | orchestrator | ok: [testbed-node-1]
2026-02-18 02:59:40.539874 | orchestrator | ok: [testbed-node-2]
2026-02-18 02:59:40.539885 | orchestrator | ok: [testbed-node-3]
2026-02-18 02:59:40.539896 | orchestrator | ok: [testbed-node-4]
2026-02-18 02:59:40.539907 | orchestrator | ok: [testbed-node-5]
2026-02-18 02:59:40.539918 | orchestrator |
2026-02-18 02:59:40.539929 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-18 02:59:40.539940 | orchestrator | Wednesday 18 February 2026 02:59:15 +0000 (0:00:00.782) 0:00:01.084 ****
2026-02-18 02:59:40.539952 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 02:59:40.539966 | orchestrator | 2026-02-18 02:59:40.539977 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-02-18 02:59:40.540011 | orchestrator | Wednesday 18 February 2026 02:59:16 +0000 (0:00:01.398) 0:00:02.483 **** 2026-02-18 02:59:40.540023 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:59:40.540034 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:59:40.540045 | orchestrator | ok: [testbed-manager] 2026-02-18 02:59:40.540056 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:59:40.540066 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:59:40.540077 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:59:40.540087 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:59:40.540098 | orchestrator | 2026-02-18 02:59:40.540109 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-02-18 02:59:40.540120 | orchestrator | Wednesday 18 February 2026 02:59:18 +0000 (0:00:01.988) 0:00:04.472 **** 2026-02-18 02:59:40.540131 | orchestrator | ok: [testbed-manager] 2026-02-18 02:59:40.540142 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:59:40.540153 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:59:40.540164 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:59:40.540175 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:59:40.540185 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:59:40.540196 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:59:40.540207 | orchestrator | 2026-02-18 02:59:40.540219 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-02-18 02:59:40.540260 | orchestrator | Wednesday 18 February 2026 02:59:20 +0000 (0:00:01.912) 0:00:06.384 **** 
2026-02-18 02:59:40.540275 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-02-18 02:59:40.540288 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-02-18 02:59:40.540301 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-02-18 02:59:40.540314 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-02-18 02:59:40.540327 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-02-18 02:59:40.540339 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-02-18 02:59:40.540350 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-02-18 02:59:40.540361 | orchestrator | 2026-02-18 02:59:40.540389 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-02-18 02:59:40.540405 | orchestrator | Wednesday 18 February 2026 02:59:21 +0000 (0:00:01.022) 0:00:07.407 **** 2026-02-18 02:59:40.540417 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 02:59:40.540430 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-18 02:59:40.540440 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-18 02:59:40.540451 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-18 02:59:40.540462 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-18 02:59:40.540473 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 02:59:40.540484 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-18 02:59:40.540494 | orchestrator | 2026-02-18 02:59:40.540505 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-02-18 02:59:40.540516 | orchestrator | Wednesday 18 February 2026 02:59:25 +0000 (0:00:03.827) 0:00:11.235 **** 2026-02-18 02:59:40.540527 | orchestrator | changed: [testbed-manager] 2026-02-18 02:59:40.540538 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:59:40.540548 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:59:40.540559 | orchestrator | changed: 
[testbed-node-2] 2026-02-18 02:59:40.540570 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:59:40.540580 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:59:40.540591 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:59:40.540602 | orchestrator | 2026-02-18 02:59:40.540613 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-02-18 02:59:40.540623 | orchestrator | Wednesday 18 February 2026 02:59:26 +0000 (0:00:01.673) 0:00:12.909 **** 2026-02-18 02:59:40.540634 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 02:59:40.540645 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-18 02:59:40.540655 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 02:59:40.540666 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-18 02:59:40.540685 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-18 02:59:40.540696 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-18 02:59:40.540706 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-18 02:59:40.540717 | orchestrator | 2026-02-18 02:59:40.540728 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-18 02:59:40.540738 | orchestrator | Wednesday 18 February 2026 02:59:28 +0000 (0:00:01.981) 0:00:14.891 **** 2026-02-18 02:59:40.540749 | orchestrator | ok: [testbed-manager] 2026-02-18 02:59:40.540760 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:59:40.540771 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:59:40.540781 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:59:40.540792 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:59:40.540803 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:59:40.540814 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:59:40.540824 | orchestrator | 2026-02-18 02:59:40.540835 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-18 02:59:40.540875 | 
orchestrator | Wednesday 18 February 2026 02:59:30 +0000 (0:00:01.273) 0:00:16.164 **** 2026-02-18 02:59:40.540887 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:59:40.540898 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:59:40.540909 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:59:40.540919 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:59:40.540930 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:59:40.540940 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:59:40.540951 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:59:40.540962 | orchestrator | 2026-02-18 02:59:40.540972 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-02-18 02:59:40.540983 | orchestrator | Wednesday 18 February 2026 02:59:30 +0000 (0:00:00.706) 0:00:16.870 **** 2026-02-18 02:59:40.540994 | orchestrator | ok: [testbed-manager] 2026-02-18 02:59:40.541004 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:59:40.541015 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:59:40.541026 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:59:40.541036 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:59:40.541047 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:59:40.541057 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:59:40.541068 | orchestrator | 2026-02-18 02:59:40.541078 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-18 02:59:40.541089 | orchestrator | Wednesday 18 February 2026 02:59:33 +0000 (0:00:02.294) 0:00:19.165 **** 2026-02-18 02:59:40.541100 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:59:40.541111 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:59:40.541121 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:59:40.541132 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:59:40.541142 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:59:40.541153 | 
orchestrator | skipping: [testbed-node-5] 2026-02-18 02:59:40.541164 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-18 02:59:40.541175 | orchestrator | 2026-02-18 02:59:40.541186 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-18 02:59:40.541197 | orchestrator | Wednesday 18 February 2026 02:59:34 +0000 (0:00:00.946) 0:00:20.112 **** 2026-02-18 02:59:40.541207 | orchestrator | ok: [testbed-manager] 2026-02-18 02:59:40.541218 | orchestrator | changed: [testbed-node-1] 2026-02-18 02:59:40.541229 | orchestrator | changed: [testbed-node-2] 2026-02-18 02:59:40.541262 | orchestrator | changed: [testbed-node-0] 2026-02-18 02:59:40.541273 | orchestrator | changed: [testbed-node-3] 2026-02-18 02:59:40.541284 | orchestrator | changed: [testbed-node-4] 2026-02-18 02:59:40.541295 | orchestrator | changed: [testbed-node-5] 2026-02-18 02:59:40.541306 | orchestrator | 2026-02-18 02:59:40.541317 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-18 02:59:40.541328 | orchestrator | Wednesday 18 February 2026 02:59:35 +0000 (0:00:01.792) 0:00:21.905 **** 2026-02-18 02:59:40.541339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 02:59:40.541360 | orchestrator | 2026-02-18 02:59:40.541371 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-18 02:59:40.541382 | orchestrator | Wednesday 18 February 2026 02:59:37 +0000 (0:00:01.317) 0:00:23.222 **** 2026-02-18 02:59:40.541392 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:59:40.541403 | orchestrator | ok: [testbed-manager] 2026-02-18 02:59:40.541414 | orchestrator 
| ok: [testbed-node-1] 2026-02-18 02:59:40.541424 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:59:40.541441 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:59:40.541452 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:59:40.541463 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:59:40.541474 | orchestrator | 2026-02-18 02:59:40.541484 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-18 02:59:40.541495 | orchestrator | Wednesday 18 February 2026 02:59:38 +0000 (0:00:01.116) 0:00:24.339 **** 2026-02-18 02:59:40.541506 | orchestrator | ok: [testbed-manager] 2026-02-18 02:59:40.541517 | orchestrator | ok: [testbed-node-0] 2026-02-18 02:59:40.541527 | orchestrator | ok: [testbed-node-1] 2026-02-18 02:59:40.541538 | orchestrator | ok: [testbed-node-2] 2026-02-18 02:59:40.541549 | orchestrator | ok: [testbed-node-3] 2026-02-18 02:59:40.541559 | orchestrator | ok: [testbed-node-4] 2026-02-18 02:59:40.541570 | orchestrator | ok: [testbed-node-5] 2026-02-18 02:59:40.541580 | orchestrator | 2026-02-18 02:59:40.541591 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-18 02:59:40.541602 | orchestrator | Wednesday 18 February 2026 02:59:39 +0000 (0:00:00.942) 0:00:25.281 **** 2026-02-18 02:59:40.541613 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-18 02:59:40.541624 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-18 02:59:40.541634 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-18 02:59:40.541645 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-02-18 02:59:40.541656 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-18 02:59:40.541666 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-18 02:59:40.541677 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-18 02:59:40.541688 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-18 02:59:40.541698 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-18 02:59:40.541709 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-18 02:59:40.541720 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-18 02:59:40.541731 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-02-18 02:59:40.541741 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-18 02:59:40.541752 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-18 02:59:40.541763 | orchestrator | 2026-02-18 02:59:40.541781 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-18 02:59:58.761604 | orchestrator | Wednesday 18 February 2026 02:59:40 +0000 (0:00:01.317) 0:00:26.599 **** 2026-02-18 02:59:58.761734 | orchestrator | skipping: [testbed-manager] 2026-02-18 02:59:58.761757 | orchestrator | skipping: [testbed-node-0] 2026-02-18 02:59:58.761775 | orchestrator | skipping: [testbed-node-1] 2026-02-18 02:59:58.761792 | orchestrator | skipping: [testbed-node-2] 2026-02-18 02:59:58.761808 | orchestrator | skipping: [testbed-node-3] 2026-02-18 02:59:58.761824 | orchestrator | skipping: [testbed-node-4] 2026-02-18 02:59:58.761840 | orchestrator | skipping: [testbed-node-5] 2026-02-18 02:59:58.761857 | orchestrator | 2026-02-18 02:59:58.761903 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-18 02:59:58.761919 | orchestrator | Wednesday 18 February 2026 02:59:41 +0000 (0:00:00.701) 0:00:27.300 **** 2026-02-18 02:59:58.761936 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-5, testbed-node-4 2026-02-18 02:59:58.761954 | orchestrator | 2026-02-18 02:59:58.761964 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-18 02:59:58.761973 | orchestrator | Wednesday 18 February 2026 02:59:46 +0000 (0:00:04.967) 0:00:32.268 **** 2026-02-18 02:59:58.761983 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.761995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762004 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 
42}}) 2026-02-18 02:59:58.762107 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762244 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762384 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762393 | orchestrator | 2026-02-18 02:59:58.762403 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-18 02:59:58.762412 | orchestrator | Wednesday 18 February 2026 02:59:52 +0000 (0:00:06.471) 0:00:38.740 **** 2026-02-18 02:59:58.762421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762430 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762448 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-18 02:59:58.762499 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762508 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 
'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-18 02:59:58.762555 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-18 03:00:05.706985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-18 03:00:05.707096 | orchestrator | 2026-02-18 03:00:05.707108 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-02-18 03:00:05.707118 | orchestrator | Wednesday 18 February 2026 02:59:58 +0000 (0:00:06.071) 0:00:44.811 **** 2026-02-18 03:00:05.707126 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:00:05.707185 | orchestrator | 2026-02-18 03:00:05.707192 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-02-18 03:00:05.707197 | orchestrator | Wednesday 18 February 2026  03:00:00 +0000 (0:00:01.342)       0:00:46.154 ****
2026-02-18 03:00:05.707202 | orchestrator | ok: [testbed-manager]
2026-02-18 03:00:05.707207 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:00:05.707211 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:00:05.707216 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:00:05.707220 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:00:05.707225 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:00:05.707229 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:00:05.707233 | orchestrator |
2026-02-18 03:00:05.707237 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-18 03:00:05.707241 | orchestrator | Wednesday 18 February 2026  03:00:01 +0000 (0:00:01.207)       0:00:47.361 ****
2026-02-18 03:00:05.707245 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-18 03:00:05.707250 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-18 03:00:05.707254 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-18 03:00:05.707258 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-18 03:00:05.707262 | orchestrator | skipping: [testbed-manager]
2026-02-18 03:00:05.707267 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-18 03:00:05.707271 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-18 03:00:05.707275 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-18 03:00:05.707278 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-18 03:00:05.707282 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-18 03:00:05.707297 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-18 03:00:05.707301 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-18 03:00:05.707305 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-18 03:00:05.707309 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:00:05.707327 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-18 03:00:05.707331 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-18 03:00:05.707335 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-18 03:00:05.707339 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:00:05.707343 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-18 03:00:05.707347 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-18 03:00:05.707351 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-18 03:00:05.707354 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-18 03:00:05.707358 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-18 03:00:05.707362 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:00:05.707366 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-18 03:00:05.707370 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-18 03:00:05.707373 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-18 03:00:05.707377 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-18 03:00:05.707381 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:00:05.707385 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:00:05.707389 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-18 03:00:05.707392 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-18 03:00:05.707396 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-18 03:00:05.707400 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-18 03:00:05.707404 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:00:05.707408 | orchestrator |
2026-02-18 03:00:05.707411 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-18 03:00:05.707426 | orchestrator | Wednesday 18 February 2026  03:00:03 +0000 (0:00:02.331)       0:00:49.693 ****
2026-02-18 03:00:05.707430 | orchestrator | skipping: [testbed-manager]
2026-02-18 03:00:05.707434 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:00:05.707438 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:00:05.707441 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:00:05.707445 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:00:05.707449 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:00:05.707453 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:00:05.707457 | orchestrator |
2026-02-18 03:00:05.707460 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-18 03:00:05.707464 | orchestrator | Wednesday 18 February 2026  03:00:04 +0000 (0:00:00.703)       0:00:50.397 ****
2026-02-18 03:00:05.707468 | orchestrator | skipping: [testbed-manager]
2026-02-18 03:00:05.707472 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:00:05.707475 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:00:05.707480 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:00:05.707484 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:00:05.707487 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:00:05.707491 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:00:05.707495 | orchestrator |
2026-02-18 03:00:05.707499 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:00:05.707503 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-18 03:00:05.707509 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 03:00:05.707519 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 03:00:05.707523 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 03:00:05.707526 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 03:00:05.707530 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 03:00:05.707534 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 03:00:05.707538 | orchestrator |
2026-02-18 03:00:05.707542 | orchestrator |
2026-02-18 03:00:05.707546 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:00:05.707551 | orchestrator | Wednesday 18 February 2026  03:00:05 +0000 (0:00:00.819)       0:00:51.216 ****
2026-02-18 03:00:05.707558 | orchestrator | ===============================================================================
2026-02-18 03:00:05.707563 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.47s
2026-02-18 03:00:05.707567 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.07s
2026-02-18 03:00:05.707572 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.97s
2026-02-18 03:00:05.707577 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.83s
2026-02-18 03:00:05.707582 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.33s
2026-02-18 03:00:05.707589 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.29s
2026-02-18 03:00:05.707595 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.99s
2026-02-18 03:00:05.707601 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.98s
2026-02-18 03:00:05.707608 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.91s
2026-02-18 03:00:05.707614 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.79s
2026-02-18 03:00:05.707620 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.67s
2026-02-18 03:00:05.707626 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.40s
2026-02-18 03:00:05.707632 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.34s
2026-02-18 03:00:05.707638 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.32s
2026-02-18 03:00:05.707644 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.32s
2026-02-18 03:00:05.707650 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.27s
2026-02-18 03:00:05.707656 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s
2026-02-18 03:00:05.707662 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.12s
2026-02-18 03:00:05.707668 | orchestrator | osism.commons.network : Create required directories --------------------- 1.02s
2026-02-18 03:00:05.707674 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s
2026-02-18 03:00:06.108029 | orchestrator | + osism apply wireguard
2026-02-18 03:00:18.407891 | orchestrator | 2026-02-18 03:00:18 | INFO  | Task 7a45a4cc-1e28-40c4-8e7b-441721355496 (wireguard) was prepared for execution.
2026-02-18 03:00:18.408009 | orchestrator | 2026-02-18 03:00:18 | INFO  | It takes a moment until task 7a45a4cc-1e28-40c4-8e7b-441721355496 (wireguard) has been started and output is visible here.
2026-02-18 03:00:39.881312 | orchestrator |
2026-02-18 03:00:39.881489 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-02-18 03:00:39.881510 | orchestrator |
2026-02-18 03:00:39.881523 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-02-18 03:00:39.881534 | orchestrator | Wednesday 18 February 2026  03:00:23 +0000 (0:00:00.242)       0:00:00.242 ****
2026-02-18 03:00:39.881546 | orchestrator | ok: [testbed-manager]
2026-02-18 03:00:39.881558 | orchestrator |
2026-02-18 03:00:39.881572 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-02-18 03:00:39.881591 | orchestrator | Wednesday 18 February 2026  03:00:24 +0000 (0:00:01.659)       0:00:01.902 ****
2026-02-18 03:00:39.881609 | orchestrator | changed: [testbed-manager]
2026-02-18 03:00:39.881634 | orchestrator |
2026-02-18 03:00:39.881652 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-02-18 03:00:39.881670 | orchestrator | Wednesday 18 February 2026  03:00:31 +0000 (0:00:07.058)       0:00:08.960 ****
2026-02-18 03:00:39.881687 | orchestrator | changed: [testbed-manager]
2026-02-18 03:00:39.881704 | orchestrator |
2026-02-18 03:00:39.881740 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-02-18 03:00:39.881760 | orchestrator | Wednesday 18 February 2026  03:00:32 +0000 (0:00:00.594)       0:00:09.555 ****
2026-02-18 03:00:39.881780 | orchestrator | changed: [testbed-manager]
2026-02-18 03:00:39.881798 | orchestrator |
2026-02-18 03:00:39.881817 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-02-18 03:00:39.881836 | orchestrator | Wednesday 18 February 2026  03:00:32 +0000 (0:00:00.450)       0:00:10.006 ****
2026-02-18 03:00:39.881853 | orchestrator | ok: [testbed-manager]
2026-02-18 03:00:39.881872 | orchestrator |
2026-02-18 03:00:39.881891 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-02-18 03:00:39.881910 | orchestrator | Wednesday 18 February 2026  03:00:33 +0000 (0:00:00.695)       0:00:10.702 ****
2026-02-18 03:00:39.881929 | orchestrator | ok: [testbed-manager]
2026-02-18 03:00:39.881948 | orchestrator |
2026-02-18 03:00:39.881967 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-02-18 03:00:39.881987 | orchestrator | Wednesday 18 February 2026  03:00:33 +0000 (0:00:00.441)       0:00:11.144 ****
2026-02-18 03:00:39.882006 | orchestrator | ok: [testbed-manager]
2026-02-18 03:00:39.882190 | orchestrator |
2026-02-18 03:00:39.882257 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-02-18 03:00:39.882278 | orchestrator | Wednesday 18 February 2026  03:00:34 +0000 (0:00:00.438)       0:00:11.582 ****
2026-02-18 03:00:39.882297 | orchestrator | changed: [testbed-manager]
2026-02-18 03:00:39.882314 | orchestrator |
2026-02-18 03:00:39.882333 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-02-18 03:00:39.882351 | orchestrator | Wednesday 18 February 2026  03:00:35 +0000 (0:00:01.220)       0:00:12.803 ****
2026-02-18 03:00:39.882371 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-18 03:00:39.882524 | orchestrator | changed: [testbed-manager]
2026-02-18 03:00:39.882546 | orchestrator |
2026-02-18 03:00:39.882566 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-18 03:00:39.882585 | orchestrator | Wednesday 18 February 2026  03:00:36 +0000 (0:00:01.014)       0:00:13.817 ****
2026-02-18 03:00:39.882605 | orchestrator | changed: [testbed-manager]
2026-02-18 03:00:39.882625 | orchestrator |
2026-02-18 03:00:39.882644 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-18 03:00:39.882664 | orchestrator | Wednesday 18 February 2026  03:00:38 +0000 (0:00:01.768)       0:00:15.586 ****
2026-02-18 03:00:39.882683 | orchestrator | changed: [testbed-manager]
2026-02-18 03:00:39.882702 | orchestrator |
2026-02-18 03:00:39.882722 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:00:39.882741 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 03:00:39.882762 | orchestrator |
2026-02-18 03:00:39.882782 | orchestrator |
2026-02-18 03:00:39.882801 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:00:39.882842 | orchestrator | Wednesday 18 February 2026  03:00:39 +0000 (0:00:01.027)       0:00:16.614 ****
2026-02-18 03:00:39.882861 | orchestrator | ===============================================================================
2026-02-18 03:00:39.882879 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.06s
2026-02-18 03:00:39.882897 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.77s
2026-02-18 03:00:39.882915 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.66s
2026-02-18 03:00:39.882933 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s
2026-02-18 03:00:39.882953 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.03s
2026-02-18 03:00:39.882970 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s
2026-02-18 03:00:39.882986 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.70s
2026-02-18 03:00:39.883004 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.59s
2026-02-18 03:00:39.883022 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s
2026-02-18 03:00:39.883039 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s
2026-02-18 03:00:39.883058 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2026-02-18 03:00:40.269455 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-18 03:00:40.308403 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-18 03:00:40.308478 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-18 03:00:40.389304 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 186 0 --:--:-- --:--:-- --:--:-- 187
2026-02-18 03:00:40.402697 | orchestrator | + osism apply --environment custom workarounds
2026-02-18 03:00:42.444539 | orchestrator | 2026-02-18 03:00:42 | INFO  | Trying to run play workarounds in environment custom
2026-02-18 03:00:52.645351 | orchestrator | 2026-02-18 03:00:52 | INFO  | Task a9ea5fab-2e16-4e83-b152-177684453ece (workarounds) was prepared for execution.
2026-02-18 03:00:52.645469 | orchestrator | 2026-02-18 03:00:52 | INFO  | It takes a moment until task a9ea5fab-2e16-4e83-b152-177684453ece (workarounds) has been started and output is visible here.
2026-02-18 03:01:20.242304 | orchestrator |
2026-02-18 03:01:20.242411 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 03:01:20.242427 | orchestrator |
2026-02-18 03:01:20.242438 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-18 03:01:20.242449 | orchestrator | Wednesday 18 February 2026  03:00:57 +0000 (0:00:00.139)       0:00:00.139 ****
2026-02-18 03:01:20.242459 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-18 03:01:20.242470 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-18 03:01:20.242480 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-18 03:01:20.242490 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-18 03:01:20.242499 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-18 03:01:20.242509 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-18 03:01:20.242519 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-18 03:01:20.242528 | orchestrator |
2026-02-18 03:01:20.242538 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-18 03:01:20.242548 | orchestrator |
2026-02-18 03:01:20.242557 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-18 03:01:20.242567 | orchestrator | Wednesday 18 February 2026  03:00:57 +0000 (0:00:00.873)       0:00:01.012 ****
2026-02-18 03:01:20.242577 | orchestrator | ok: [testbed-manager]
2026-02-18 03:01:20.242610 | orchestrator |
2026-02-18 03:01:20.242620 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-18 03:01:20.242630 | orchestrator |
2026-02-18 03:01:20.242640 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-18 03:01:20.242649 | orchestrator | Wednesday 18 February 2026  03:01:00 +0000 (0:00:02.623)       0:00:03.636 ****
2026-02-18 03:01:20.242659 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:01:20.242669 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:01:20.242678 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:01:20.242687 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:01:20.242697 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:01:20.242706 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:01:20.242715 | orchestrator |
2026-02-18 03:01:20.242725 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-18 03:01:20.242734 | orchestrator |
2026-02-18 03:01:20.242744 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-18 03:01:20.242769 | orchestrator | Wednesday 18 February 2026  03:01:02 +0000 (0:00:01.893)       0:00:05.530 ****
2026-02-18 03:01:20.242780 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-18 03:01:20.242791 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-18 03:01:20.242800 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-18 03:01:20.242810 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-18 03:01:20.242819 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-18 03:01:20.242829 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-18 03:01:20.242838 | orchestrator |
2026-02-18 03:01:20.242848 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-18 03:01:20.242857 | orchestrator | Wednesday 18 February 2026  03:01:03 +0000 (0:00:01.538)       0:00:07.069 ****
2026-02-18 03:01:20.242867 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:01:20.242877 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:01:20.242886 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:01:20.242896 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:01:20.242905 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:01:20.242914 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:01:20.242924 | orchestrator |
2026-02-18 03:01:20.242933 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-18 03:01:20.242943 | orchestrator | Wednesday 18 February 2026  03:01:07 +0000 (0:00:03.876)       0:00:10.945 ****
2026-02-18 03:01:20.242952 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:01:20.242962 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:01:20.242972 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:01:20.242981 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:01:20.243014 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:01:20.243031 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:01:20.243048 | orchestrator |
2026-02-18 03:01:20.243066 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-18 03:01:20.243082 | orchestrator |
2026-02-18 03:01:20.243098 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-18 03:01:20.243112 | orchestrator | Wednesday 18 February 2026  03:01:08 +0000 (0:00:00.779)       0:00:11.725 ****
2026-02-18 03:01:20.243122 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:01:20.243131 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:01:20.243141 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:01:20.243150 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:01:20.243160 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:01:20.243169 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:01:20.243187 | orchestrator | changed: [testbed-manager]
2026-02-18 03:01:20.243197 | orchestrator |
2026-02-18 03:01:20.243207 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-18 03:01:20.243217 | orchestrator | Wednesday 18 February 2026  03:01:10 +0000 (0:00:01.736)       0:00:13.462 ****
2026-02-18 03:01:20.243227 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:01:20.243236 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:01:20.243246 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:01:20.243256 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:01:20.243265 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:01:20.243275 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:01:20.243302 | orchestrator | changed: [testbed-manager]
2026-02-18 03:01:20.243312 | orchestrator |
2026-02-18 03:01:20.243321 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-18 03:01:20.243331 | orchestrator | Wednesday 18 February 2026  03:01:12 +0000 (0:00:01.648)       0:00:15.133 ****
2026-02-18 03:01:20.243340 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:01:20.243350 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:01:20.243360 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:01:20.243369 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:01:20.243378 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:01:20.243388 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:01:20.243397 | orchestrator | ok: [testbed-manager]
2026-02-18 03:01:20.243407 | orchestrator |
2026-02-18 03:01:20.243416 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-18 03:01:20.243426 | orchestrator | Wednesday 18 February 2026  03:01:13 +0000 (0:00:01.648)       0:00:16.781 ****
2026-02-18 03:01:20.243436 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:01:20.243445 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:01:20.243454 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:01:20.243464 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:01:20.243474 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:01:20.243483 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:01:20.243492 | orchestrator | changed: [testbed-manager]
2026-02-18 03:01:20.243502 | orchestrator |
2026-02-18 03:01:20.243511 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-18 03:01:20.243521 | orchestrator | Wednesday 18 February 2026  03:01:15 +0000 (0:00:01.979)       0:00:18.760 ****
2026-02-18 03:01:20.243530 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:01:20.243540 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:01:20.243550 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:01:20.243559 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:01:20.243569 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:01:20.243578 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:01:20.243587 | orchestrator | skipping: [testbed-manager]
2026-02-18 03:01:20.243597 | orchestrator |
2026-02-18 03:01:20.243607 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-18 03:01:20.243616 | orchestrator |
2026-02-18 03:01:20.243626 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-18 03:01:20.243635 | orchestrator | Wednesday 18 February 2026  03:01:16 +0000 (0:00:00.692)       0:00:19.453 ****
2026-02-18 03:01:20.243645 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:01:20.243655 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:01:20.243664 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:01:20.243674 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:01:20.243683 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:01:20.243698 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:01:20.243708 | orchestrator | ok: [testbed-manager]
2026-02-18 03:01:20.243718 | orchestrator |
2026-02-18 03:01:20.243727 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:01:20.243738 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-18 03:01:20.243749 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 03:01:20.243765 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 03:01:20.243774 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 03:01:20.243784 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 03:01:20.243794 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 03:01:20.243803 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 03:01:20.243813 | orchestrator |
2026-02-18 03:01:20.243822 | orchestrator |
2026-02-18 03:01:20.243832 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:01:20.243842 | orchestrator | Wednesday 18 February 2026  03:01:20 +0000 (0:00:03.894)       0:00:23.347 ****
2026-02-18 03:01:20.243851 | orchestrator | ===============================================================================
2026-02-18 03:01:20.243861 | orchestrator | Install python3-docker -------------------------------------------------- 3.89s
2026-02-18 03:01:20.243871 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.88s
2026-02-18 03:01:20.243880 | orchestrator | Apply netplan configuration --------------------------------------------- 2.62s
2026-02-18 03:01:20.243890 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.98s
2026-02-18 03:01:20.243899 | orchestrator | Apply netplan configuration --------------------------------------------- 1.89s
2026-02-18 03:01:20.243909 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.74s
2026-02-18 03:01:20.243918 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.67s
2026-02-18 03:01:20.243928 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.65s
2026-02-18 03:01:20.243937 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.54s
2026-02-18 03:01:20.243947 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.87s
2026-02-18 03:01:20.243956 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.78s
2026-02-18 03:01:20.243971 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.69s
2026-02-18 03:01:21.082223 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-18 03:01:33.235077 | orchestrator | 2026-02-18 03:01:33 | INFO  | Task 3f26c49b-03e6-4cde-a578-b3549fd23a4f (reboot) was prepared for execution.
2026-02-18 03:01:33.235177 | orchestrator | 2026-02-18 03:01:33 | INFO  | It takes a moment until task 3f26c49b-03e6-4cde-a578-b3549fd23a4f (reboot) has been started and output is visible here.
2026-02-18 03:01:44.143730 | orchestrator |
2026-02-18 03:01:44.143844 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-18 03:01:44.143864 | orchestrator |
2026-02-18 03:01:44.143880 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-18 03:01:44.143894 | orchestrator | Wednesday 18 February 2026  03:01:37 +0000 (0:00:00.250)       0:00:00.250 ****
2026-02-18 03:01:44.143908 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:01:44.143922 | orchestrator |
2026-02-18 03:01:44.143936 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-18 03:01:44.144009 | orchestrator | Wednesday 18 February 2026  03:01:37 +0000 (0:00:00.141)       0:00:00.392 ****
2026-02-18 03:01:44.144024 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:01:44.144038 | orchestrator |
2026-02-18 03:01:44.144052 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-18 03:01:44.144092 | orchestrator | Wednesday 18 February 2026  03:01:38 +0000 (0:00:00.985)       0:00:01.378 ****
2026-02-18 03:01:44.144107 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:01:44.144121 | orchestrator |
2026-02-18 03:01:44.144134 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-18 03:01:44.144148 | orchestrator |
2026-02-18 03:01:44.144162 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-18 03:01:44.144175 | orchestrator | Wednesday 18 February 2026  03:01:39 +0000 (0:00:00.139)       0:00:01.518 ****
2026-02-18 03:01:44.144189 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:01:44.144202 | orchestrator |
2026-02-18 03:01:44.144215 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-18 03:01:44.144229 | orchestrator | Wednesday 18 February 2026  03:01:39 +0000 (0:00:00.129)       0:00:01.647 ****
2026-02-18 03:01:44.144241 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:01:44.144255 | orchestrator |
2026-02-18 03:01:44.144268 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-18 03:01:44.144296 | orchestrator | Wednesday 18 February 2026  03:01:39 +0000 (0:00:00.700)       0:00:02.348 ****
2026-02-18 03:01:44.144311 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:01:44.144326 | orchestrator |
2026-02-18 03:01:44.144339 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-18 03:01:44.144353 | orchestrator |
2026-02-18 03:01:44.144367 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-18 03:01:44.144382 | orchestrator | Wednesday 18 February 2026  03:01:39 +0000 (0:00:00.111)       0:00:02.459 ****
2026-02-18 03:01:44.144396 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:01:44.144410 | orchestrator |
2026-02-18 03:01:44.144423 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-18 03:01:44.144437 | orchestrator | Wednesday 18 February 2026  03:01:40 +0000 (0:00:00.239)       0:00:02.699 ****
2026-02-18 03:01:44.144450 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:01:44.144464 | orchestrator |
2026-02-18 03:01:44.144479 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-18 03:01:44.144493 | orchestrator | Wednesday 18 February 2026  03:01:40 +0000 (0:00:00.709)       0:00:03.409 ****
2026-02-18 03:01:44.144507 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:01:44.144521 | orchestrator |
2026-02-18 03:01:44.144534 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-18 03:01:44.144548 | orchestrator |
2026-02-18 03:01:44.144561 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-18 03:01:44.144576 | orchestrator | Wednesday 18 February 2026  03:01:41 +0000 (0:00:00.110)       0:00:03.520 ****
2026-02-18 03:01:44.144589 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:01:44.144603 | orchestrator |
2026-02-18 03:01:44.144616 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-18 03:01:44.144630 | orchestrator | Wednesday 18 February 2026  03:01:41 +0000 (0:00:00.109)       0:00:03.630 ****
2026-02-18 03:01:44.144644 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:01:44.144658 | orchestrator |
2026-02-18 03:01:44.144671 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-18 03:01:44.144684 | orchestrator | Wednesday 18 February 2026  03:01:41 +0000 (0:00:00.709)       0:00:04.339 ****
2026-02-18 03:01:44.144697 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:01:44.144711 | orchestrator |
2026-02-18 03:01:44.144724 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-18 03:01:44.144738 | orchestrator |
2026-02-18 03:01:44.144751 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-18 03:01:44.144764 | orchestrator | Wednesday 18 February 2026  03:01:41 +0000 (0:00:00.140)       0:00:04.480 ****
2026-02-18 03:01:44.144777 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:01:44.144791 | orchestrator |
2026-02-18 03:01:44.144804 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-18 03:01:44.144826 | orchestrator | Wednesday 18 February 2026  03:01:42 +0000 (0:00:00.132)       0:00:04.612 ****
2026-02-18 03:01:44.144840 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:01:44.144854 | orchestrator |
2026-02-18 03:01:44.144867 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-18 03:01:44.144880 | orchestrator | Wednesday 18 February 2026  03:01:42 +0000 (0:00:00.705)       0:00:05.318 ****
2026-02-18 03:01:44.144892 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:01:44.144907 | orchestrator |
2026-02-18 03:01:44.144920 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-18 03:01:44.144934 | orchestrator |
2026-02-18 03:01:44.144970 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-18 03:01:44.144983 | orchestrator | Wednesday 18 February 2026  03:01:42 +0000 (0:00:00.123)       0:00:05.442 ****
2026-02-18 03:01:44.144996 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:01:44.145010 | orchestrator |
2026-02-18 03:01:44.145024 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-18 03:01:44.145037 | orchestrator | Wednesday 18 February 2026  03:01:43 +0000 (0:00:00.106)       0:00:05.548 ****
2026-02-18 03:01:44.145050 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:01:44.145063 | orchestrator |
2026-02-18 03:01:44.145076 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-18 03:01:44.145090 | orchestrator | Wednesday 18 February 2026  03:01:43 +0000 (0:00:00.647)       0:00:06.196 ****
2026-02-18 03:01:44.145123 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:01:44.145138 | orchestrator |
2026-02-18 03:01:44.145151 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:01:44.145165 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 03:01:44.145181 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:01:44.145195 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:01:44.145208 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:01:44.145221 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:01:44.145234 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:01:44.145248 | orchestrator | 2026-02-18 03:01:44.145261 | orchestrator | 2026-02-18 03:01:44.145275 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:01:44.145288 | orchestrator | Wednesday 18 February 2026 03:01:43 +0000 (0:00:00.039) 0:00:06.236 **** 2026-02-18 03:01:44.145307 | orchestrator | =============================================================================== 2026-02-18 03:01:44.145320 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.46s 2026-02-18 03:01:44.145334 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.86s 2026-02-18 03:01:44.145347 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.67s 2026-02-18 03:01:44.510281 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-18 03:01:56.625781 | orchestrator | 2026-02-18 03:01:56 | INFO  | Task 1c79d2cc-5190-4e5c-81a9-6a8ecc8a5367 (wait-for-connection) was prepared for execution. 2026-02-18 03:01:56.625863 | orchestrator | 2026-02-18 03:01:56 | INFO  | It takes a moment until task 1c79d2cc-5190-4e5c-81a9-6a8ecc8a5367 (wait-for-connection) has been started and output is visible here. 
2026-02-18 03:02:13.033781 | orchestrator | 2026-02-18 03:02:13.033861 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-18 03:02:13.033869 | orchestrator | 2026-02-18 03:02:13.033874 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-18 03:02:13.033880 | orchestrator | Wednesday 18 February 2026 03:02:01 +0000 (0:00:00.262) 0:00:00.262 **** 2026-02-18 03:02:13.033924 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:02:13.033931 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:02:13.033936 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:02:13.033941 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:02:13.033946 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:02:13.033950 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:02:13.033955 | orchestrator | 2026-02-18 03:02:13.033960 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:02:13.033965 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:02:13.033972 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:02:13.033977 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:02:13.033982 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:02:13.033986 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:02:13.033991 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:02:13.033996 | orchestrator | 2026-02-18 03:02:13.034001 | orchestrator | 2026-02-18 03:02:13.034006 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-18 03:02:13.034010 | orchestrator | Wednesday 18 February 2026 03:02:12 +0000 (0:00:11.562) 0:00:11.824 **** 2026-02-18 03:02:13.034052 | orchestrator | =============================================================================== 2026-02-18 03:02:13.034057 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.56s 2026-02-18 03:02:13.350579 | orchestrator | + osism apply hddtemp 2026-02-18 03:02:25.569601 | orchestrator | 2026-02-18 03:02:25 | INFO  | Task 98c59557-d396-4553-97a4-108309c94475 (hddtemp) was prepared for execution. 2026-02-18 03:02:25.569716 | orchestrator | 2026-02-18 03:02:25 | INFO  | It takes a moment until task 98c59557-d396-4553-97a4-108309c94475 (hddtemp) has been started and output is visible here. 2026-02-18 03:02:54.768049 | orchestrator | 2026-02-18 03:02:54.768134 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-18 03:02:54.768144 | orchestrator | 2026-02-18 03:02:54.768150 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-18 03:02:54.768157 | orchestrator | Wednesday 18 February 2026 03:02:30 +0000 (0:00:00.287) 0:00:00.287 **** 2026-02-18 03:02:54.768164 | orchestrator | ok: [testbed-manager] 2026-02-18 03:02:54.768171 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:02:54.768177 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:02:54.768183 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:02:54.768189 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:02:54.768195 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:02:54.768201 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:02:54.768207 | orchestrator | 2026-02-18 03:02:54.768213 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-18 03:02:54.768219 | orchestrator | Wednesday 18 February 2026 
03:02:30 +0000 (0:00:00.804) 0:00:01.091 **** 2026-02-18 03:02:54.768226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:02:54.768252 | orchestrator | 2026-02-18 03:02:54.768259 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-18 03:02:54.768265 | orchestrator | Wednesday 18 February 2026 03:02:32 +0000 (0:00:01.355) 0:00:02.447 **** 2026-02-18 03:02:54.768271 | orchestrator | ok: [testbed-manager] 2026-02-18 03:02:54.768277 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:02:54.768282 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:02:54.768288 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:02:54.768294 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:02:54.768300 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:02:54.768306 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:02:54.768312 | orchestrator | 2026-02-18 03:02:54.768318 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-18 03:02:54.768335 | orchestrator | Wednesday 18 February 2026 03:02:34 +0000 (0:00:02.048) 0:00:04.495 **** 2026-02-18 03:02:54.768342 | orchestrator | changed: [testbed-manager] 2026-02-18 03:02:54.768348 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:02:54.768354 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:02:54.768360 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:02:54.768365 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:02:54.768371 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:02:54.768377 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:02:54.768382 | orchestrator | 2026-02-18 03:02:54.768388 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-18 03:02:54.768394 | orchestrator | Wednesday 18 February 2026 03:02:35 +0000 (0:00:01.266) 0:00:05.761 **** 2026-02-18 03:02:54.768400 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:02:54.768406 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:02:54.768411 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:02:54.768417 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:02:54.768423 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:02:54.768428 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:02:54.768434 | orchestrator | ok: [testbed-manager] 2026-02-18 03:02:54.768440 | orchestrator | 2026-02-18 03:02:54.768446 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-18 03:02:54.768452 | orchestrator | Wednesday 18 February 2026 03:02:36 +0000 (0:00:01.264) 0:00:07.026 **** 2026-02-18 03:02:54.768457 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:02:54.768463 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:02:54.768469 | orchestrator | changed: [testbed-manager] 2026-02-18 03:02:54.768475 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:02:54.768480 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:02:54.768486 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:02:54.768492 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:02:54.768498 | orchestrator | 2026-02-18 03:02:54.768503 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-18 03:02:54.768509 | orchestrator | Wednesday 18 February 2026 03:02:37 +0000 (0:00:00.901) 0:00:07.928 **** 2026-02-18 03:02:54.768515 | orchestrator | changed: [testbed-manager] 2026-02-18 03:02:54.768520 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:02:54.768526 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:02:54.768532 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:02:54.768538 | orchestrator | changed: 
[testbed-node-0] 2026-02-18 03:02:54.768544 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:02:54.768549 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:02:54.768555 | orchestrator | 2026-02-18 03:02:54.768561 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-18 03:02:54.768567 | orchestrator | Wednesday 18 February 2026 03:02:50 +0000 (0:00:13.161) 0:00:21.090 **** 2026-02-18 03:02:54.768573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:02:54.768584 | orchestrator | 2026-02-18 03:02:54.768590 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-18 03:02:54.768596 | orchestrator | Wednesday 18 February 2026 03:02:52 +0000 (0:00:01.561) 0:00:22.652 **** 2026-02-18 03:02:54.768603 | orchestrator | changed: [testbed-manager] 2026-02-18 03:02:54.768610 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:02:54.768617 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:02:54.768624 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:02:54.768631 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:02:54.768638 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:02:54.768644 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:02:54.768651 | orchestrator | 2026-02-18 03:02:54.768658 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:02:54.768665 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:02:54.768685 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:02:54.768693 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:02:54.768701 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:02:54.768707 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:02:54.768714 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:02:54.768722 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:02:54.768728 | orchestrator | 2026-02-18 03:02:54.768735 | orchestrator | 2026-02-18 03:02:54.768742 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:02:54.768749 | orchestrator | Wednesday 18 February 2026 03:02:54 +0000 (0:00:01.929) 0:00:24.581 **** 2026-02-18 03:02:54.768756 | orchestrator | =============================================================================== 2026-02-18 03:02:54.768763 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.16s 2026-02-18 03:02:54.768770 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.05s 2026-02-18 03:02:54.768777 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.93s 2026-02-18 03:02:54.768787 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.56s 2026-02-18 03:02:54.768794 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.36s 2026-02-18 03:02:54.768801 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.27s 2026-02-18 03:02:54.768832 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.26s 2026-02-18 03:02:54.768843 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.90s 2026-02-18 03:02:54.768854 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.80s 2026-02-18 03:02:55.157339 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-18 03:02:55.208992 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-18 03:02:55.209075 | orchestrator | + sudo systemctl restart manager.service 2026-02-18 03:03:08.968862 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-18 03:03:08.968955 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-18 03:03:08.968968 | orchestrator | + local max_attempts=60 2026-02-18 03:03:08.968977 | orchestrator | + local name=ceph-ansible 2026-02-18 03:03:08.968986 | orchestrator | + local attempt_num=1 2026-02-18 03:03:08.968994 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:03:09.003770 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-18 03:03:09.003866 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-18 03:03:09.003874 | orchestrator | + sleep 5 2026-02-18 03:03:14.009021 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:03:14.050196 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-18 03:03:14.050284 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-18 03:03:14.050296 | orchestrator | + sleep 5 2026-02-18 03:03:19.054518 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:03:19.093621 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-18 03:03:19.093716 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-18 03:03:19.093731 | orchestrator | + sleep 5 2026-02-18 03:03:24.098910 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:03:24.135487 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-18 03:03:24.135585 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-18 03:03:24.135603 | orchestrator | + sleep 5 2026-02-18 03:03:29.143225 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:03:29.181962 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-18 03:03:29.182055 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-18 03:03:29.182062 | orchestrator | + sleep 5 2026-02-18 03:03:34.186834 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:03:34.223662 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-18 03:03:34.223836 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-18 03:03:34.223855 | orchestrator | + sleep 5 2026-02-18 03:03:39.229173 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:03:39.273561 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-18 03:03:39.273644 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-18 03:03:39.273654 | orchestrator | + sleep 5 2026-02-18 03:03:44.278397 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:03:44.329447 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-18 03:03:44.329543 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-18 03:03:44.329558 | orchestrator | + sleep 5 2026-02-18 03:03:49.334475 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:03:49.367085 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-18 03:03:49.367297 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-18 03:03:49.367330 | orchestrator | + sleep 5 2026-02-18 03:03:54.371357 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:03:54.413376 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-18 03:03:54.413498 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-18 03:03:54.413527 | orchestrator | + sleep 5 2026-02-18 03:03:59.418823 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:03:59.458918 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-18 03:03:59.459009 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-18 03:03:59.459023 | orchestrator | + sleep 5 2026-02-18 03:04:04.464022 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:04:04.503129 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-18 03:04:04.503224 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-18 03:04:04.503240 | orchestrator | + sleep 5 2026-02-18 03:04:09.507360 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:04:09.548521 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-18 03:04:09.548616 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-18 03:04:09.548632 | orchestrator | + sleep 5 2026-02-18 03:04:14.552760 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-18 03:04:14.602370 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-18 03:04:14.602490 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-18 03:04:14.602515 | orchestrator | + local max_attempts=60 2026-02-18 03:04:14.602535 | orchestrator | + local name=kolla-ansible 2026-02-18 03:04:14.602550 | orchestrator | + local attempt_num=1 2026-02-18 03:04:14.602903 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-18 03:04:14.642421 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-18 03:04:14.642523 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-18 03:04:14.642570 | orchestrator | + local max_attempts=60 2026-02-18 03:04:14.642582 | orchestrator | + local name=osism-ansible 2026-02-18 03:04:14.642594 | 
orchestrator | + local attempt_num=1 2026-02-18 03:04:14.642927 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-18 03:04:14.682343 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-18 03:04:14.682457 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-18 03:04:14.682473 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-18 03:04:14.862010 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-18 03:04:15.023070 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-18 03:04:15.212761 | orchestrator | ARA in osism-ansible already disabled. 2026-02-18 03:04:15.376148 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-18 03:04:15.376837 | orchestrator | + osism apply gather-facts 2026-02-18 03:04:27.574221 | orchestrator | 2026-02-18 03:04:27 | INFO  | Task 3462051f-066d-44cf-8806-c4cbd82f6440 (gather-facts) was prepared for execution. 2026-02-18 03:04:27.574353 | orchestrator | 2026-02-18 03:04:27 | INFO  | It takes a moment until task 3462051f-066d-44cf-8806-c4cbd82f6440 (gather-facts) has been started and output is visible here. 
2026-02-18 03:04:41.973651 | orchestrator | 2026-02-18 03:04:41.973867 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-18 03:04:41.973894 | orchestrator | 2026-02-18 03:04:41.973914 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-18 03:04:41.973933 | orchestrator | Wednesday 18 February 2026 03:04:31 +0000 (0:00:00.225) 0:00:00.225 **** 2026-02-18 03:04:41.973955 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:04:41.973976 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:04:41.973995 | orchestrator | ok: [testbed-manager] 2026-02-18 03:04:41.974013 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:04:41.974156 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:04:41.974178 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:04:41.974197 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:04:41.974217 | orchestrator | 2026-02-18 03:04:41.974239 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-18 03:04:41.974261 | orchestrator | 2026-02-18 03:04:41.974284 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-18 03:04:41.974306 | orchestrator | Wednesday 18 February 2026 03:04:40 +0000 (0:00:08.993) 0:00:09.218 **** 2026-02-18 03:04:41.974330 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:04:41.974353 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:04:41.974376 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:04:41.974398 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:04:41.974421 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:04:41.974444 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:04:41.974466 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:04:41.974489 | orchestrator | 2026-02-18 03:04:41.974508 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-18 03:04:41.974529 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:04:41.974550 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:04:41.974568 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:04:41.974587 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:04:41.974607 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:04:41.974625 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:04:41.974726 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:04:41.974747 | orchestrator | 2026-02-18 03:04:41.974766 | orchestrator | 2026-02-18 03:04:41.974785 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:04:41.974804 | orchestrator | Wednesday 18 February 2026 03:04:41 +0000 (0:00:00.615) 0:00:09.834 **** 2026-02-18 03:04:41.974822 | orchestrator | =============================================================================== 2026-02-18 03:04:41.974841 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.99s 2026-02-18 03:04:41.974860 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2026-02-18 03:04:42.355749 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-18 03:04:42.370590 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-18 
03:04:42.390962 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-18 03:04:42.409134 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-18 03:04:42.425073 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-18 03:04:42.442379 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-18 03:04:42.457892 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-18 03:04:42.472097 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-18 03:04:42.485540 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-18 03:04:42.499311 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-18 03:04:42.511293 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-18 03:04:42.525180 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-18 03:04:42.540044 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-18 03:04:42.555774 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-18 03:04:42.576848 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-18 03:04:42.593049 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-18 03:04:42.609882 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-18 03:04:42.624755 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-18 03:04:42.642073 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-18 03:04:42.659766 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-18 03:04:42.671924 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-18 03:04:42.694013 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-18 03:04:42.713800 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-18 03:04:42.730274 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-18 03:04:42.833610 | orchestrator | ok: Runtime: 0:24:55.932915 2026-02-18 03:04:42.941640 | 2026-02-18 03:04:42.941793 | TASK [Deploy services] 2026-02-18 03:04:43.641879 | orchestrator | 2026-02-18 03:04:43.642105 | orchestrator | # DEPLOY SERVICES 2026-02-18 03:04:43.642135 | orchestrator | 2026-02-18 03:04:43.642149 | orchestrator | + set -e 2026-02-18 03:04:43.642162 | orchestrator | + echo 2026-02-18 03:04:43.642175 | orchestrator | + echo '# DEPLOY SERVICES' 2026-02-18 03:04:43.642189 | orchestrator | + echo 2026-02-18 03:04:43.642232 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-18 03:04:43.642255 | orchestrator | ++ export INTERACTIVE=false 2026-02-18 03:04:43.642269 | orchestrator | ++ INTERACTIVE=false 2026-02-18 
03:04:43.642280 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-18 03:04:43.642302 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-18 03:04:43.642313 | orchestrator | + source /opt/manager-vars.sh 2026-02-18 03:04:43.642328 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-18 03:04:43.642339 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-18 03:04:43.642356 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-18 03:04:43.642367 | orchestrator | ++ CEPH_VERSION=reef 2026-02-18 03:04:43.642382 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-18 03:04:43.642393 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-18 03:04:43.642409 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-18 03:04:43.642420 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-18 03:04:43.642431 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-18 03:04:43.642443 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-18 03:04:43.642453 | orchestrator | ++ export ARA=false 2026-02-18 03:04:43.642465 | orchestrator | ++ ARA=false 2026-02-18 03:04:43.642475 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-18 03:04:43.642486 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-18 03:04:43.642497 | orchestrator | ++ export TEMPEST=false 2026-02-18 03:04:43.642508 | orchestrator | ++ TEMPEST=false 2026-02-18 03:04:43.642518 | orchestrator | ++ export IS_ZUUL=true 2026-02-18 03:04:43.642529 | orchestrator | ++ IS_ZUUL=true 2026-02-18 03:04:43.642540 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2026-02-18 03:04:43.642551 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2026-02-18 03:04:43.642562 | orchestrator | ++ export EXTERNAL_API=false 2026-02-18 03:04:43.642573 | orchestrator | ++ EXTERNAL_API=false 2026-02-18 03:04:43.642584 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-18 03:04:43.642594 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-18 03:04:43.642605 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-18 
03:04:43.642615 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-18 03:04:43.642626 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-18 03:04:43.642644 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-18 03:04:43.642679 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-18 03:04:43.654074 | orchestrator | 2026-02-18 03:04:43.654152 | orchestrator | # PULL IMAGES 2026-02-18 03:04:43.654162 | orchestrator | 2026-02-18 03:04:43.654169 | orchestrator | + set -e 2026-02-18 03:04:43.654176 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-18 03:04:43.654186 | orchestrator | ++ export INTERACTIVE=false 2026-02-18 03:04:43.654192 | orchestrator | ++ INTERACTIVE=false 2026-02-18 03:04:43.654199 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-18 03:04:43.654205 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-18 03:04:43.654222 | orchestrator | + source /opt/manager-vars.sh 2026-02-18 03:04:43.654229 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-18 03:04:43.654235 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-18 03:04:43.654249 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-18 03:04:43.654255 | orchestrator | ++ CEPH_VERSION=reef 2026-02-18 03:04:43.654262 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-18 03:04:43.654268 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-18 03:04:43.654274 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-18 03:04:43.654281 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-18 03:04:43.654287 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-18 03:04:43.654294 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-18 03:04:43.654300 | orchestrator | ++ export ARA=false 2026-02-18 03:04:43.654306 | orchestrator | ++ ARA=false 2026-02-18 03:04:43.654316 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-18 03:04:43.654322 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-18 03:04:43.654328 | orchestrator | ++ export TEMPEST=false 
2026-02-18 03:04:43.654335 | orchestrator | ++ TEMPEST=false 2026-02-18 03:04:43.654341 | orchestrator | ++ export IS_ZUUL=true 2026-02-18 03:04:43.654347 | orchestrator | ++ IS_ZUUL=true 2026-02-18 03:04:43.654353 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2026-02-18 03:04:43.654360 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2026-02-18 03:04:43.654366 | orchestrator | ++ export EXTERNAL_API=false 2026-02-18 03:04:43.654372 | orchestrator | ++ EXTERNAL_API=false 2026-02-18 03:04:43.654378 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-18 03:04:43.654384 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-18 03:04:43.654415 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-18 03:04:43.654422 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-18 03:04:43.654428 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-18 03:04:43.654434 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-18 03:04:43.654441 | orchestrator | + echo 2026-02-18 03:04:43.654447 | orchestrator | + echo '# PULL IMAGES' 2026-02-18 03:04:43.654453 | orchestrator | + echo 2026-02-18 03:04:43.655492 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-18 03:04:43.719938 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-18 03:04:43.720052 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-18 03:04:45.767942 | orchestrator | 2026-02-18 03:04:45 | INFO  | Trying to run play pull-images in environment custom 2026-02-18 03:04:55.905807 | orchestrator | 2026-02-18 03:04:55 | INFO  | Task 95821a31-a32a-447a-9f11-3426a991d0cc (pull-images) was prepared for execution. 2026-02-18 03:04:55.905921 | orchestrator | 2026-02-18 03:04:55 | INFO  | Task 95821a31-a32a-447a-9f11-3426a991d0cc is running in background. No more output. Check ARA for logs. 
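The `semver 9.5.0 7.0.0` call above prints `1`, which the script then tests with `[[ 1 -ge 0 ]]` before running `osism apply --no-wait -r 2 -e custom pull-images`. A minimal pure-bash re-implementation of that gate (assumption: the real `semver` helper prints `1`/`0`/`-1` for greater/equal/less — inferred from the observed output, not confirmed from its source):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the semver helper seen in the trace: prints 1 if
# $1 > $2, 0 if equal, -1 if $1 < $2.
semver() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
    echo 1    # $2 sorts first under version ordering, so $1 is newer
  else
    echo -1
  fi
}

# Version gate as used by pull-images.sh: only pull on manager >= 7.0.0.
MANAGER_VERSION=9.5.0
if [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]]; then
  echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

`sort -V` gives GNU version ordering, which matches plain three-part versions like these; pre-release suffixes would need extra handling.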
2026-02-18 03:04:56.262243 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-02-18 03:05:08.456313 | orchestrator | 2026-02-18 03:05:08 | INFO  | Task 814db1d1-236c-48c1-a91d-3d19e90c7e87 (cgit) was prepared for execution. 2026-02-18 03:05:08.456452 | orchestrator | 2026-02-18 03:05:08 | INFO  | Task 814db1d1-236c-48c1-a91d-3d19e90c7e87 is running in background. No more output. Check ARA for logs. 2026-02-18 03:05:21.313964 | orchestrator | 2026-02-18 03:05:21 | INFO  | Task a2b912ec-c650-41bb-88c0-119705b7175c (dotfiles) was prepared for execution. 2026-02-18 03:05:21.314157 | orchestrator | 2026-02-18 03:05:21 | INFO  | Task a2b912ec-c650-41bb-88c0-119705b7175c is running in background. No more output. Check ARA for logs. 2026-02-18 03:05:33.975246 | orchestrator | 2026-02-18 03:05:33 | INFO  | Task 972b75a4-0045-454e-b75a-5c0b48ea2521 (homer) was prepared for execution. 2026-02-18 03:05:33.975321 | orchestrator | 2026-02-18 03:05:33 | INFO  | Task 972b75a4-0045-454e-b75a-5c0b48ea2521 is running in background. No more output. Check ARA for logs. 2026-02-18 03:05:46.694450 | orchestrator | 2026-02-18 03:05:46 | INFO  | Task 2cecef6c-74e4-42a1-98e4-77980b5f4cdf (phpmyadmin) was prepared for execution. 2026-02-18 03:05:46.694659 | orchestrator | 2026-02-18 03:05:46 | INFO  | Task 2cecef6c-74e4-42a1-98e4-77980b5f4cdf is running in background. No more output. Check ARA for logs. 2026-02-18 03:05:59.449104 | orchestrator | 2026-02-18 03:05:59 | INFO  | Task 83c906fb-62ea-4661-968f-60ad497d0afd (sosreport) was prepared for execution. 2026-02-18 03:05:59.449214 | orchestrator | 2026-02-18 03:05:59 | INFO  | Task 83c906fb-62ea-4661-968f-60ad497d0afd is running in background. No more output. Check ARA for logs. 
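`001-helpers.sh` fires five helper plays (cgit, dotfiles, homer, phpmyadmin, sosreport) in sequence, each handed off to run in the background per the `osism` output above. A self-contained sketch of that pattern (the script's actual contents are not shown in the log; `apply_helper` is a stub standing in for the real `osism apply` call):

```shell
#!/usr/bin/env bash
set -e

# Helper services deployed by 001-helpers.sh, in the order seen in the log.
HELPERS="cgit dotfiles homer phpmyadmin sosreport"

apply_helper() {
  # The real script presumably runs: osism apply "$1"
  # (stubbed here so the sketch runs without an OSISM manager)
  echo "apply $1"
}

for helper in $HELPERS; do
  apply_helper "$helper"
done
```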
2026-02-18 03:05:59.830858 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-02-18 03:05:59.839682 | orchestrator | + set -e 2026-02-18 03:05:59.839824 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-18 03:05:59.839851 | orchestrator | ++ export INTERACTIVE=false 2026-02-18 03:05:59.839872 | orchestrator | ++ INTERACTIVE=false 2026-02-18 03:05:59.839893 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-18 03:05:59.839912 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-18 03:05:59.839931 | orchestrator | + source /opt/manager-vars.sh 2026-02-18 03:05:59.839948 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-18 03:05:59.839966 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-18 03:05:59.839984 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-18 03:05:59.840003 | orchestrator | ++ CEPH_VERSION=reef 2026-02-18 03:05:59.840021 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-18 03:05:59.840033 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-18 03:05:59.840049 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-18 03:05:59.840067 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-18 03:05:59.840085 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-18 03:05:59.840104 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-18 03:05:59.840121 | orchestrator | ++ export ARA=false 2026-02-18 03:05:59.840140 | orchestrator | ++ ARA=false 2026-02-18 03:05:59.840159 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-18 03:05:59.840214 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-18 03:05:59.840233 | orchestrator | ++ export TEMPEST=false 2026-02-18 03:05:59.840252 | orchestrator | ++ TEMPEST=false 2026-02-18 03:05:59.840268 | orchestrator | ++ export IS_ZUUL=true 2026-02-18 03:05:59.840287 | orchestrator | ++ IS_ZUUL=true 2026-02-18 03:05:59.840327 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2026-02-18 03:05:59.840353 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2026-02-18 03:05:59.840371 | orchestrator | ++ export EXTERNAL_API=false 2026-02-18 03:05:59.840387 | orchestrator | ++ EXTERNAL_API=false 2026-02-18 03:05:59.840404 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-18 03:05:59.840424 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-18 03:05:59.840465 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-18 03:05:59.840501 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-18 03:05:59.840518 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-18 03:05:59.840535 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-18 03:05:59.840732 | orchestrator | ++ semver 9.5.0 8.0.3 2026-02-18 03:05:59.921211 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-18 03:05:59.921306 | orchestrator | + osism apply frr 2026-02-18 03:06:12.193704 | orchestrator | 2026-02-18 03:06:12 | INFO  | Task f75506df-b448-4203-9473-76c315b8d246 (frr) was prepared for execution. 2026-02-18 03:06:12.193812 | orchestrator | 2026-02-18 03:06:12 | INFO  | It takes a moment until task f75506df-b448-4203-9473-76c315b8d246 (frr) has been started and output is visible here. 
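Every deploy script in this run shares the same preamble: `set -e`, then `source /opt/configuration/scripts/include.sh` and `source /opt/manager-vars.sh`, which is why the same `++ export …` block repeats in each trace. A condensed stand-in for that preamble, with values copied from the trace (the real files set more variables than shown here):

```shell
#!/usr/bin/env bash
set -e

# Stand-in for /opt/manager-vars.sh, values taken from the trace above.
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export CEPH_STACK=ceph-ansible
export MANAGER_VERSION=9.5.0
export OPENSTACK_VERSION=2024.2
export DEPLOY_MODE=manager

# Stand-in for include.sh.
export INTERACTIVE=false
export OSISM_APPLY_RETRY=1

echo "deploy mode ${DEPLOY_MODE}: OpenStack ${OPENSTACK_VERSION}, Ceph ${CEPH_VERSION} (${CEPH_STACK})"
```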
2026-02-18 03:06:49.046322 | orchestrator | 2026-02-18 03:06:49.046416 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-18 03:06:49.046426 | orchestrator | 2026-02-18 03:06:49.046433 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-18 03:06:49.046445 | orchestrator | Wednesday 18 February 2026 03:06:19 +0000 (0:00:00.272) 0:00:00.272 **** 2026-02-18 03:06:49.046451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-18 03:06:49.046459 | orchestrator | 2026-02-18 03:06:49.046465 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-18 03:06:49.046473 | orchestrator | Wednesday 18 February 2026 03:06:19 +0000 (0:00:00.363) 0:00:00.635 **** 2026-02-18 03:06:49.046483 | orchestrator | changed: [testbed-manager] 2026-02-18 03:06:49.046493 | orchestrator | 2026-02-18 03:06:49.046503 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-18 03:06:49.046515 | orchestrator | Wednesday 18 February 2026 03:06:21 +0000 (0:00:01.798) 0:00:02.434 **** 2026-02-18 03:06:49.046526 | orchestrator | changed: [testbed-manager] 2026-02-18 03:06:49.046575 | orchestrator | 2026-02-18 03:06:49.046582 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-18 03:06:49.046589 | orchestrator | Wednesday 18 February 2026 03:06:37 +0000 (0:00:16.166) 0:00:18.600 **** 2026-02-18 03:06:49.046595 | orchestrator | ok: [testbed-manager] 2026-02-18 03:06:49.046601 | orchestrator | 2026-02-18 03:06:49.046607 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-18 03:06:49.046614 | orchestrator | Wednesday 18 February 2026 03:06:38 +0000 (0:00:01.050) 0:00:19.651 **** 2026-02-18 
03:06:49.046620 | orchestrator | changed: [testbed-manager] 2026-02-18 03:06:49.046626 | orchestrator | 2026-02-18 03:06:49.046632 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-18 03:06:49.046638 | orchestrator | Wednesday 18 February 2026 03:06:39 +0000 (0:00:00.933) 0:00:20.585 **** 2026-02-18 03:06:49.046643 | orchestrator | ok: [testbed-manager] 2026-02-18 03:06:49.046649 | orchestrator | 2026-02-18 03:06:49.046655 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-18 03:06:49.046662 | orchestrator | Wednesday 18 February 2026 03:06:41 +0000 (0:00:01.415) 0:00:22.000 **** 2026-02-18 03:06:49.046668 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:06:49.046674 | orchestrator | 2026-02-18 03:06:49.046680 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-18 03:06:49.046686 | orchestrator | Wednesday 18 February 2026 03:06:41 +0000 (0:00:00.169) 0:00:22.170 **** 2026-02-18 03:06:49.046709 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:06:49.046716 | orchestrator | 2026-02-18 03:06:49.046722 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-18 03:06:49.046728 | orchestrator | Wednesday 18 February 2026 03:06:41 +0000 (0:00:00.204) 0:00:22.374 **** 2026-02-18 03:06:49.046734 | orchestrator | changed: [testbed-manager] 2026-02-18 03:06:49.046740 | orchestrator | 2026-02-18 03:06:49.046745 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-18 03:06:49.046751 | orchestrator | Wednesday 18 February 2026 03:06:42 +0000 (0:00:01.193) 0:00:23.567 **** 2026-02-18 03:06:49.046757 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-18 03:06:49.046763 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-18 03:06:49.046770 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-18 03:06:49.046776 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-18 03:06:49.046781 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-18 03:06:49.046787 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-18 03:06:49.046793 | orchestrator | 2026-02-18 03:06:49.046799 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-18 03:06:49.046804 | orchestrator | Wednesday 18 February 2026 03:06:45 +0000 (0:00:02.470) 0:00:26.038 **** 2026-02-18 03:06:49.046810 | orchestrator | ok: [testbed-manager] 2026-02-18 03:06:49.046816 | orchestrator | 2026-02-18 03:06:49.046822 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-18 03:06:49.046828 | orchestrator | Wednesday 18 February 2026 03:06:47 +0000 (0:00:01.814) 0:00:27.853 **** 2026-02-18 03:06:49.046838 | orchestrator | changed: [testbed-manager] 2026-02-18 03:06:49.046848 | orchestrator | 2026-02-18 03:06:49.046858 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:06:49.046869 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:06:49.046879 | orchestrator | 2026-02-18 03:06:49.046888 | orchestrator | 2026-02-18 03:06:49.046904 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:06:49.046913 | orchestrator | Wednesday 18 February 2026 03:06:48 +0000 (0:00:01.518) 0:00:29.371 **** 2026-02-18 03:06:49.046922 | 
orchestrator | =============================================================================== 2026-02-18 03:06:49.046931 | orchestrator | osism.services.frr : Install frr package ------------------------------- 16.17s 2026-02-18 03:06:49.046939 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.47s 2026-02-18 03:06:49.046948 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.81s 2026-02-18 03:06:49.046957 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.80s 2026-02-18 03:06:49.046967 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.52s 2026-02-18 03:06:49.046993 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.42s 2026-02-18 03:06:49.047003 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.19s 2026-02-18 03:06:49.047013 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.05s 2026-02-18 03:06:49.047023 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.93s 2026-02-18 03:06:49.047033 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.36s 2026-02-18 03:06:49.047043 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.20s 2026-02-18 03:06:49.047054 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.17s 2026-02-18 03:06:49.430492 | orchestrator | + osism apply kubernetes 2026-02-18 03:06:51.760904 | orchestrator | 2026-02-18 03:06:51 | INFO  | Task 30804acc-7d51-4179-8fb0-6cbe5704c1f6 (kubernetes) was prepared for execution. 
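The frr role's `Set sysctl parameters` task looped over six kernel settings (the `changed:` items above). An equivalent hand-rolled sketch that renders them as a sysctl.d-style drop-in — the role itself applies them through Ansible, presumably via a sysctl module, so this is only an illustration of the resulting configuration:

```shell
#!/usr/bin/env bash
set -e

# Kernel parameters set by osism.services.frr, copied from the task output.
declare -A FRR_SYSCTLS=(
  [net.ipv4.ip_forward]=1
  [net.ipv4.conf.all.send_redirects]=0
  [net.ipv4.conf.all.accept_redirects]=0
  [net.ipv4.fib_multipath_hash_policy]=1
  [net.ipv4.conf.default.ignore_routes_with_linkdown]=1
  [net.ipv4.conf.all.rp_filter]=2
)

# Render a sysctl.d-style drop-in; on a real host this would be written to
# /etc/sysctl.d/ and activated with `sysctl --system`.
FRR_SYSCTL_FILE=$(mktemp)
for key in "${!FRR_SYSCTLS[@]}"; do
  printf '%s = %s\n' "$key" "${FRR_SYSCTLS[$key]}"
done > "$FRR_SYSCTL_FILE"

cat "$FRR_SYSCTL_FILE"
```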
2026-02-18 03:06:51.761012 | orchestrator | 2026-02-18 03:06:51 | INFO  | It takes a moment until task 30804acc-7d51-4179-8fb0-6cbe5704c1f6 (kubernetes) has been started and output is visible here. 2026-02-18 03:07:17.076768 | orchestrator | 2026-02-18 03:07:17.076880 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-18 03:07:17.076896 | orchestrator | 2026-02-18 03:07:17.076907 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-18 03:07:17.076919 | orchestrator | Wednesday 18 February 2026 03:06:57 +0000 (0:00:00.218) 0:00:00.218 **** 2026-02-18 03:07:17.076931 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:07:17.076943 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:07:17.076953 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:07:17.076965 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:07:17.076976 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:07:17.076987 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:07:17.076997 | orchestrator | 2026-02-18 03:07:17.077008 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-18 03:07:17.077019 | orchestrator | Wednesday 18 February 2026 03:06:57 +0000 (0:00:00.816) 0:00:01.034 **** 2026-02-18 03:07:17.077030 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:07:17.077042 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:07:17.077052 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:07:17.077063 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:07:17.077073 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:07:17.077084 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:07:17.077095 | orchestrator | 2026-02-18 03:07:17.077106 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-18 03:07:17.077119 | orchestrator | Wednesday 18 February 2026 
03:06:58 +0000 (0:00:00.625) 0:00:01.659 **** 2026-02-18 03:07:17.077130 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:07:17.077140 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:07:17.077151 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:07:17.077162 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:07:17.077173 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:07:17.077183 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:07:17.077194 | orchestrator | 2026-02-18 03:07:17.077210 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-18 03:07:17.077230 | orchestrator | Wednesday 18 February 2026 03:06:59 +0000 (0:00:00.759) 0:00:02.419 **** 2026-02-18 03:07:17.077330 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:07:17.077351 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:07:17.077369 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:07:17.077393 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:07:17.077412 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:07:17.077430 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:07:17.077447 | orchestrator | 2026-02-18 03:07:17.077466 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-18 03:07:17.077486 | orchestrator | Wednesday 18 February 2026 03:07:01 +0000 (0:00:02.039) 0:00:04.458 **** 2026-02-18 03:07:17.077504 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:07:17.077549 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:07:17.077567 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:07:17.077587 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:07:17.077605 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:07:17.077623 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:07:17.077641 | orchestrator | 2026-02-18 03:07:17.077660 | orchestrator | TASK [k3s_prereq : Enable 
IPv6 router advertisements] ************************** 2026-02-18 03:07:17.077680 | orchestrator | Wednesday 18 February 2026 03:07:02 +0000 (0:00:01.425) 0:00:05.884 **** 2026-02-18 03:07:17.077698 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:07:17.077745 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:07:17.077757 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:07:17.077768 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:07:17.077779 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:07:17.077789 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:07:17.077800 | orchestrator | 2026-02-18 03:07:17.077921 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-18 03:07:17.077938 | orchestrator | Wednesday 18 February 2026 03:07:03 +0000 (0:00:01.224) 0:00:07.109 **** 2026-02-18 03:07:17.077949 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:07:17.077963 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:07:17.077983 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:07:17.078003 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:07:17.078106 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:07:17.078128 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:07:17.078147 | orchestrator | 2026-02-18 03:07:17.078167 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-18 03:07:17.078189 | orchestrator | Wednesday 18 February 2026 03:07:04 +0000 (0:00:00.853) 0:00:07.962 **** 2026-02-18 03:07:17.078209 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:07:17.078229 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:07:17.078249 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:07:17.078268 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:07:17.078286 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:07:17.078300 | orchestrator | skipping: 
[testbed-node-2] 2026-02-18 03:07:17.078310 | orchestrator | 2026-02-18 03:07:17.078322 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-18 03:07:17.078332 | orchestrator | Wednesday 18 February 2026 03:07:05 +0000 (0:00:00.674) 0:00:08.637 **** 2026-02-18 03:07:17.078343 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-18 03:07:17.078355 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-18 03:07:17.078366 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:07:17.078376 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-18 03:07:17.078387 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-18 03:07:17.078397 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:07:17.078408 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-18 03:07:17.078418 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-18 03:07:17.078429 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:07:17.078440 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-18 03:07:17.078476 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-18 03:07:17.078488 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:07:17.078500 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-18 03:07:17.078559 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-18 03:07:17.078571 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:07:17.078582 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-18 03:07:17.078593 | orchestrator | 
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-18 03:07:17.078604 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:07:17.078614 | orchestrator | 2026-02-18 03:07:17.078626 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-18 03:07:17.078636 | orchestrator | Wednesday 18 February 2026 03:07:06 +0000 (0:00:00.645) 0:00:09.282 **** 2026-02-18 03:07:17.078647 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:07:17.078658 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:07:17.078668 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:07:17.078693 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:07:17.078704 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:07:17.078714 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:07:17.078725 | orchestrator | 2026-02-18 03:07:17.078736 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-18 03:07:17.078748 | orchestrator | Wednesday 18 February 2026 03:07:07 +0000 (0:00:01.353) 0:00:10.636 **** 2026-02-18 03:07:17.078758 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:07:17.078769 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:07:17.078780 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:07:17.078790 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:07:17.078801 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:07:17.078860 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:07:17.078872 | orchestrator | 2026-02-18 03:07:17.078884 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-18 03:07:17.078895 | orchestrator | Wednesday 18 February 2026 03:07:08 +0000 (0:00:00.780) 0:00:11.416 **** 2026-02-18 03:07:17.078905 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:07:17.078916 | orchestrator | changed: [testbed-node-1] 
2026-02-18 03:07:17.078927 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:07:17.078938 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:07:17.078948 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:07:17.078959 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:07:17.078970 | orchestrator | 2026-02-18 03:07:17.078981 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-18 03:07:17.078992 | orchestrator | Wednesday 18 February 2026 03:07:13 +0000 (0:00:05.205) 0:00:16.622 **** 2026-02-18 03:07:17.079002 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:07:17.079022 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:07:17.079033 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:07:17.079044 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:07:17.079061 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:07:17.079079 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:07:17.079095 | orchestrator | 2026-02-18 03:07:17.079114 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-18 03:07:17.079132 | orchestrator | Wednesday 18 February 2026 03:07:14 +0000 (0:00:00.861) 0:00:17.484 **** 2026-02-18 03:07:17.079151 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:07:17.079169 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:07:17.079189 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:07:17.079211 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:07:17.079231 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:07:17.079247 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:07:17.079259 | orchestrator | 2026-02-18 03:07:17.079269 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-18 03:07:17.079282 | orchestrator | Wednesday 18 February 2026 
03:07:15 +0000 (0:00:01.225) 0:00:18.709 **** 2026-02-18 03:07:17.079292 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:07:17.079303 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:07:17.079313 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:07:17.079324 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:07:17.079334 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:07:17.079345 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:07:17.079355 | orchestrator | 2026-02-18 03:07:17.079366 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-18 03:07:17.079377 | orchestrator | Wednesday 18 February 2026 03:07:16 +0000 (0:00:00.638) 0:00:19.347 **** 2026-02-18 03:07:17.079388 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-18 03:07:17.079405 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-18 03:07:17.079416 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:07:17.079427 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-18 03:07:17.079448 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-18 03:07:17.079459 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:07:17.079469 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-18 03:07:17.079480 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-18 03:07:17.079491 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:07:17.079501 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-18 03:07:17.079680 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-18 03:07:17.079710 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:07:17.079731 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-18 03:07:17.079750 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-02-18 03:07:17.079770 | 
orchestrator | skipping: [testbed-node-1] 2026-02-18 03:07:17.079792 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-18 03:07:17.079812 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-18 03:07:17.079832 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:07:17.079844 | orchestrator | 2026-02-18 03:07:17.079855 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-18 03:07:17.079880 | orchestrator | Wednesday 18 February 2026 03:07:17 +0000 (0:00:00.836) 0:00:20.183 **** 2026-02-18 03:08:33.080431 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:08:33.080559 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:08:33.080568 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:08:33.080575 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:08:33.080581 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:08:33.080586 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:08:33.080592 | orchestrator | 2026-02-18 03:08:33.080599 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-18 03:08:33.080607 | orchestrator | Wednesday 18 February 2026 03:07:17 +0000 (0:00:00.582) 0:00:20.766 **** 2026-02-18 03:08:33.080613 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:08:33.080618 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:08:33.080624 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:08:33.080629 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:08:33.080635 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:08:33.080640 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:08:33.080646 | orchestrator | 2026-02-18 03:08:33.080651 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-18 03:08:33.080657 | orchestrator | 2026-02-18 03:08:33.080663 | 
orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-18 03:08:33.080669 | orchestrator | Wednesday 18 February 2026 03:07:18 +0000 (0:00:01.285) 0:00:22.052 **** 2026-02-18 03:08:33.080674 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:08:33.080680 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:08:33.080686 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:08:33.080691 | orchestrator | 2026-02-18 03:08:33.080697 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-18 03:08:33.080702 | orchestrator | Wednesday 18 February 2026 03:07:20 +0000 (0:00:01.887) 0:00:23.939 **** 2026-02-18 03:08:33.080708 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:08:33.080713 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:08:33.080718 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:08:33.080724 | orchestrator | 2026-02-18 03:08:33.080729 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-18 03:08:33.080735 | orchestrator | Wednesday 18 February 2026 03:07:22 +0000 (0:00:01.295) 0:00:25.235 **** 2026-02-18 03:08:33.080740 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:08:33.080746 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:08:33.080751 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:08:33.080757 | orchestrator | 2026-02-18 03:08:33.080763 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-18 03:08:33.080784 | orchestrator | Wednesday 18 February 2026 03:07:22 +0000 (0:00:00.827) 0:00:26.062 **** 2026-02-18 03:08:33.080789 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:08:33.080795 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:08:33.080800 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:08:33.080806 | orchestrator | 2026-02-18 03:08:33.080811 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] 
********************************* 2026-02-18 03:08:33.080816 | orchestrator | Wednesday 18 February 2026 03:07:23 +0000 (0:00:00.787) 0:00:26.850 **** 2026-02-18 03:08:33.080822 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:08:33.080827 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:08:33.080833 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:08:33.080838 | orchestrator | 2026-02-18 03:08:33.080844 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-18 03:08:33.080861 | orchestrator | Wednesday 18 February 2026 03:07:24 +0000 (0:00:00.392) 0:00:27.243 **** 2026-02-18 03:08:33.080867 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:08:33.080873 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:08:33.080878 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:08:33.080883 | orchestrator | 2026-02-18 03:08:33.080889 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-18 03:08:33.080894 | orchestrator | Wednesday 18 February 2026 03:07:25 +0000 (0:00:01.130) 0:00:28.374 **** 2026-02-18 03:08:33.080900 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:08:33.080905 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:08:33.080910 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:08:33.080916 | orchestrator | 2026-02-18 03:08:33.080921 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-18 03:08:33.080927 | orchestrator | Wednesday 18 February 2026 03:07:26 +0000 (0:00:01.273) 0:00:29.647 **** 2026-02-18 03:08:33.080932 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:08:33.080938 | orchestrator | 2026-02-18 03:08:33.080943 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-02-18 03:08:33.080949 | orchestrator | 
Wednesday 18 February 2026 03:07:27 +0000 (0:00:00.592) 0:00:30.240 **** 2026-02-18 03:08:33.080954 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:08:33.080960 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:08:33.080965 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:08:33.080970 | orchestrator | 2026-02-18 03:08:33.080976 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-18 03:08:33.080981 | orchestrator | Wednesday 18 February 2026 03:07:29 +0000 (0:00:02.281) 0:00:32.522 **** 2026-02-18 03:08:33.080987 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:08:33.080992 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:08:33.080998 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:08:33.081003 | orchestrator | 2026-02-18 03:08:33.081009 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-18 03:08:33.081014 | orchestrator | Wednesday 18 February 2026 03:07:29 +0000 (0:00:00.563) 0:00:33.086 **** 2026-02-18 03:08:33.081019 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:08:33.081025 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:08:33.081030 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:08:33.081036 | orchestrator | 2026-02-18 03:08:33.081041 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-18 03:08:33.081046 | orchestrator | Wednesday 18 February 2026 03:07:31 +0000 (0:00:01.308) 0:00:34.394 **** 2026-02-18 03:08:33.081052 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:08:33.081057 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:08:33.081062 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:08:33.081068 | orchestrator | 2026-02-18 03:08:33.081073 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-18 03:08:33.081090 | orchestrator | Wednesday 18 February 
2026 03:07:32 +0000 (0:00:01.239) 0:00:35.633 **** 2026-02-18 03:08:33.081096 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:08:33.081106 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:08:33.081111 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:08:33.081117 | orchestrator | 2026-02-18 03:08:33.081122 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-18 03:08:33.081128 | orchestrator | Wednesday 18 February 2026 03:07:33 +0000 (0:00:00.589) 0:00:36.223 **** 2026-02-18 03:08:33.081133 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:08:33.081138 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:08:33.081144 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:08:33.081149 | orchestrator | 2026-02-18 03:08:33.081155 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-18 03:08:33.081160 | orchestrator | Wednesday 18 February 2026 03:07:33 +0000 (0:00:00.307) 0:00:36.531 **** 2026-02-18 03:08:33.081165 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:08:33.081173 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:08:33.081182 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:08:33.081192 | orchestrator | 2026-02-18 03:08:33.081206 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-18 03:08:33.081212 | orchestrator | Wednesday 18 February 2026 03:07:34 +0000 (0:00:01.226) 0:00:37.757 **** 2026-02-18 03:08:33.081217 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:08:33.081222 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:08:33.081228 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:08:33.081233 | orchestrator | 2026-02-18 03:08:33.081238 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-18 03:08:33.081244 | orchestrator | Wednesday 18 February 2026 03:07:37 +0000 
(0:00:02.819) 0:00:40.576 **** 2026-02-18 03:08:33.081249 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:08:33.081254 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:08:33.081259 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:08:33.081267 | orchestrator | 2026-02-18 03:08:33.081273 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-18 03:08:33.081279 | orchestrator | Wednesday 18 February 2026 03:07:37 +0000 (0:00:00.352) 0:00:40.929 **** 2026-02-18 03:08:33.081284 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-18 03:08:33.081291 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-18 03:08:33.081297 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-18 03:08:33.081302 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-18 03:08:33.081308 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-18 03:08:33.081313 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-18 03:08:33.081318 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-18 03:08:33.081323 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-02-18 03:08:33.081329 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-18 03:08:33.081334 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-18 03:08:33.081339 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-18 03:08:33.081350 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-18 03:08:33.081356 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-18 03:08:33.081361 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-18 03:08:33.081366 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-02-18 03:08:33.081372 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:08:33.081377 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:08:33.081382 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:08:33.081387 | orchestrator | 2026-02-18 03:08:33.081397 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-18 03:08:33.081403 | orchestrator | Wednesday 18 February 2026 03:08:31 +0000 (0:00:53.974) 0:01:34.904 **** 2026-02-18 03:08:33.081408 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:08:33.081413 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:08:33.081418 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:08:33.081424 | orchestrator | 2026-02-18 03:08:33.081429 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-18 03:08:33.081435 | orchestrator | Wednesday 18 February 2026 03:08:32 +0000 (0:00:00.322) 0:01:35.226 **** 2026-02-18 03:08:33.081444 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:09:16.645827 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:09:16.645951 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:09:16.645969 | orchestrator | 2026-02-18 03:09:16.645981 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-18 03:09:16.645994 | orchestrator | Wednesday 18 February 2026 03:08:33 +0000 (0:00:00.974) 0:01:36.201 **** 2026-02-18 03:09:16.646002 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:09:16.646012 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:09:16.646082 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:09:16.646092 | orchestrator | 2026-02-18 03:09:16.646102 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-18 03:09:16.646112 | orchestrator | Wednesday 18 February 2026 03:08:34 +0000 (0:00:01.178) 0:01:37.380 **** 2026-02-18 03:09:16.646121 
| orchestrator | changed: [testbed-node-2] 2026-02-18 03:09:16.646131 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:09:16.646140 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:09:16.646149 | orchestrator | 2026-02-18 03:09:16.646158 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-18 03:09:16.646167 | orchestrator | Wednesday 18 February 2026 03:09:01 +0000 (0:00:27.187) 0:02:04.567 **** 2026-02-18 03:09:16.646176 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:09:16.646187 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:09:16.646197 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:09:16.646207 | orchestrator | 2026-02-18 03:09:16.646217 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-18 03:09:16.646227 | orchestrator | Wednesday 18 February 2026 03:09:02 +0000 (0:00:00.641) 0:02:05.209 **** 2026-02-18 03:09:16.646237 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:09:16.646247 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:09:16.646257 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:09:16.646267 | orchestrator | 2026-02-18 03:09:16.646277 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-18 03:09:16.646287 | orchestrator | Wednesday 18 February 2026 03:09:02 +0000 (0:00:00.621) 0:02:05.831 **** 2026-02-18 03:09:16.646297 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:09:16.646307 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:09:16.646317 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:09:16.646327 | orchestrator | 2026-02-18 03:09:16.646337 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-18 03:09:16.646370 | orchestrator | Wednesday 18 February 2026 03:09:03 +0000 (0:00:00.659) 0:02:06.491 **** 2026-02-18 03:09:16.646381 | orchestrator | ok: [testbed-node-0] 
2026-02-18 03:09:16.646391 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:09:16.646401 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:09:16.646411 | orchestrator | 2026-02-18 03:09:16.646421 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-18 03:09:16.646431 | orchestrator | Wednesday 18 February 2026 03:09:04 +0000 (0:00:00.872) 0:02:07.363 **** 2026-02-18 03:09:16.646441 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:09:16.646509 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:09:16.646521 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:09:16.646532 | orchestrator | 2026-02-18 03:09:16.646544 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-18 03:09:16.646556 | orchestrator | Wednesday 18 February 2026 03:09:04 +0000 (0:00:00.308) 0:02:07.672 **** 2026-02-18 03:09:16.646569 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:09:16.646579 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:09:16.646589 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:09:16.646599 | orchestrator | 2026-02-18 03:09:16.646609 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-18 03:09:16.646619 | orchestrator | Wednesday 18 February 2026 03:09:05 +0000 (0:00:00.638) 0:02:08.310 **** 2026-02-18 03:09:16.646628 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:09:16.646640 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:09:16.646653 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:09:16.646663 | orchestrator | 2026-02-18 03:09:16.646674 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-18 03:09:16.646685 | orchestrator | Wednesday 18 February 2026 03:09:05 +0000 (0:00:00.648) 0:02:08.958 **** 2026-02-18 03:09:16.646695 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:09:16.646706 | 
orchestrator | changed: [testbed-node-1] 2026-02-18 03:09:16.646719 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:09:16.646730 | orchestrator | 2026-02-18 03:09:16.646740 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-18 03:09:16.646749 | orchestrator | Wednesday 18 February 2026 03:09:06 +0000 (0:00:01.072) 0:02:10.031 **** 2026-02-18 03:09:16.646761 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:09:16.646769 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:09:16.646778 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:09:16.646787 | orchestrator | 2026-02-18 03:09:16.646796 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-18 03:09:16.646805 | orchestrator | Wednesday 18 February 2026 03:09:07 +0000 (0:00:00.795) 0:02:10.827 **** 2026-02-18 03:09:16.646815 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:09:16.646824 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:09:16.646834 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:09:16.646843 | orchestrator | 2026-02-18 03:09:16.646851 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-18 03:09:16.646860 | orchestrator | Wednesday 18 February 2026 03:09:07 +0000 (0:00:00.292) 0:02:11.119 **** 2026-02-18 03:09:16.646870 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:09:16.646878 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:09:16.646888 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:09:16.646897 | orchestrator | 2026-02-18 03:09:16.646906 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-18 03:09:16.646916 | orchestrator | Wednesday 18 February 2026 03:09:08 +0000 (0:00:00.385) 0:02:11.505 **** 2026-02-18 03:09:16.646926 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:09:16.646935 | orchestrator | 
ok: [testbed-node-0] 2026-02-18 03:09:16.646945 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:09:16.646954 | orchestrator | 2026-02-18 03:09:16.646965 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-18 03:09:16.646974 | orchestrator | Wednesday 18 February 2026 03:09:09 +0000 (0:00:00.646) 0:02:12.151 **** 2026-02-18 03:09:16.646997 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:09:16.647007 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:09:16.647039 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:09:16.647050 | orchestrator | 2026-02-18 03:09:16.647061 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-18 03:09:16.647072 | orchestrator | Wednesday 18 February 2026 03:09:09 +0000 (0:00:00.885) 0:02:13.037 **** 2026-02-18 03:09:16.647082 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-18 03:09:16.647092 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-18 03:09:16.647102 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-18 03:09:16.647113 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-18 03:09:16.647122 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-18 03:09:16.647132 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-18 03:09:16.647142 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-18 03:09:16.647152 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-18 
03:09:16.647162 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-18 03:09:16.647171 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-18 03:09:16.647182 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-18 03:09:16.647192 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-18 03:09:16.647203 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-18 03:09:16.647212 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-18 03:09:16.647222 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-18 03:09:16.647231 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-18 03:09:16.647241 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-18 03:09:16.647251 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-18 03:09:16.647261 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-18 03:09:16.647270 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-18 03:09:16.647280 | orchestrator | 2026-02-18 03:09:16.647289 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-18 03:09:16.647299 | orchestrator | 2026-02-18 03:09:16.647309 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-18 03:09:16.647319 | orchestrator | Wednesday 18 February 2026 03:09:12 +0000 (0:00:02.972) 
0:02:16.009 **** 2026-02-18 03:09:16.647329 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:09:16.647339 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:09:16.647349 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:09:16.647358 | orchestrator | 2026-02-18 03:09:16.647385 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-18 03:09:16.647395 | orchestrator | Wednesday 18 February 2026 03:09:13 +0000 (0:00:00.367) 0:02:16.377 **** 2026-02-18 03:09:16.647405 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:09:16.647415 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:09:16.647425 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:09:16.647467 | orchestrator | 2026-02-18 03:09:16.647477 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-18 03:09:16.647486 | orchestrator | Wednesday 18 February 2026 03:09:14 +0000 (0:00:01.472) 0:02:17.850 **** 2026-02-18 03:09:16.647495 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:09:16.647504 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:09:16.647513 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:09:16.647523 | orchestrator | 2026-02-18 03:09:16.647533 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-18 03:09:16.647543 | orchestrator | Wednesday 18 February 2026 03:09:15 +0000 (0:00:00.357) 0:02:18.207 **** 2026-02-18 03:09:16.647553 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:09:16.647563 | orchestrator | 2026-02-18 03:09:16.647573 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-18 03:09:16.647583 | orchestrator | Wednesday 18 February 2026 03:09:15 +0000 (0:00:00.503) 0:02:18.711 **** 2026-02-18 03:09:16.647592 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:09:16.647602 | 
orchestrator | skipping: [testbed-node-4] 2026-02-18 03:09:16.647611 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:09:16.647620 | orchestrator | 2026-02-18 03:09:16.647629 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-18 03:09:16.647639 | orchestrator | Wednesday 18 February 2026 03:09:16 +0000 (0:00:00.537) 0:02:19.249 **** 2026-02-18 03:09:16.647649 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:09:16.647659 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:09:16.647667 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:09:16.647676 | orchestrator | 2026-02-18 03:09:16.647685 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-18 03:09:16.647695 | orchestrator | Wednesday 18 February 2026 03:09:16 +0000 (0:00:00.336) 0:02:19.586 **** 2026-02-18 03:09:16.647716 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:10:56.720105 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:10:56.720225 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:10:56.720240 | orchestrator | 2026-02-18 03:10:56.720251 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-18 03:10:56.720263 | orchestrator | Wednesday 18 February 2026 03:09:16 +0000 (0:00:00.318) 0:02:19.905 **** 2026-02-18 03:10:56.720273 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:10:56.720283 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:10:56.720292 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:10:56.720302 | orchestrator | 2026-02-18 03:10:56.720312 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-18 03:10:56.720321 | orchestrator | Wednesday 18 February 2026 03:09:17 +0000 (0:00:00.665) 0:02:20.570 **** 2026-02-18 03:10:56.720331 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:10:56.720340 | 
orchestrator | changed: [testbed-node-4] 2026-02-18 03:10:56.720350 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:10:56.720359 | orchestrator | 2026-02-18 03:10:56.720385 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-18 03:10:56.720396 | orchestrator | Wednesday 18 February 2026 03:09:18 +0000 (0:00:01.385) 0:02:21.956 **** 2026-02-18 03:10:56.720406 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:10:56.720443 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:10:56.720457 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:10:56.720467 | orchestrator | 2026-02-18 03:10:56.720477 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-18 03:10:56.720486 | orchestrator | Wednesday 18 February 2026 03:09:20 +0000 (0:00:01.212) 0:02:23.169 **** 2026-02-18 03:10:56.720496 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:10:56.720506 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:10:56.720515 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:10:56.720525 | orchestrator | 2026-02-18 03:10:56.720535 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-18 03:10:56.720562 | orchestrator | 2026-02-18 03:10:56.720572 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-18 03:10:56.720587 | orchestrator | Wednesday 18 February 2026 03:09:30 +0000 (0:00:10.384) 0:02:33.553 **** 2026-02-18 03:10:56.720607 | orchestrator | ok: [testbed-manager] 2026-02-18 03:10:56.720632 | orchestrator | 2026-02-18 03:10:56.720648 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-18 03:10:56.720664 | orchestrator | Wednesday 18 February 2026 03:09:31 +0000 (0:00:01.003) 0:02:34.557 **** 2026-02-18 03:10:56.720679 | orchestrator | changed: [testbed-manager] 2026-02-18 
03:10:56.720695 | orchestrator | 2026-02-18 03:10:56.720710 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-18 03:10:56.720725 | orchestrator | Wednesday 18 February 2026 03:09:31 +0000 (0:00:00.463) 0:02:35.020 **** 2026-02-18 03:10:56.720741 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-18 03:10:56.720758 | orchestrator | 2026-02-18 03:10:56.720774 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-18 03:10:56.720791 | orchestrator | Wednesday 18 February 2026 03:09:32 +0000 (0:00:00.524) 0:02:35.545 **** 2026-02-18 03:10:56.720810 | orchestrator | changed: [testbed-manager] 2026-02-18 03:10:56.720827 | orchestrator | 2026-02-18 03:10:56.720843 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-18 03:10:56.720856 | orchestrator | Wednesday 18 February 2026 03:09:33 +0000 (0:00:00.882) 0:02:36.428 **** 2026-02-18 03:10:56.720868 | orchestrator | changed: [testbed-manager] 2026-02-18 03:10:56.720879 | orchestrator | 2026-02-18 03:10:56.720889 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-18 03:10:56.720900 | orchestrator | Wednesday 18 February 2026 03:09:33 +0000 (0:00:00.646) 0:02:37.075 **** 2026-02-18 03:10:56.720912 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-18 03:10:56.720923 | orchestrator | 2026-02-18 03:10:56.720935 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-18 03:10:56.720946 | orchestrator | Wednesday 18 February 2026 03:09:35 +0000 (0:00:01.672) 0:02:38.747 **** 2026-02-18 03:10:56.720956 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-18 03:10:56.720967 | orchestrator | 2026-02-18 03:10:56.720996 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-02-18 03:10:56.721006 | orchestrator | Wednesday 18 February 2026 03:09:36 +0000 (0:00:00.978) 0:02:39.726 ****
2026-02-18 03:10:56.721015 | orchestrator | changed: [testbed-manager]
2026-02-18 03:10:56.721025 | orchestrator |
2026-02-18 03:10:56.721034 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-18 03:10:56.721043 | orchestrator | Wednesday 18 February 2026 03:09:37 +0000 (0:00:00.459) 0:02:40.186 ****
2026-02-18 03:10:56.721053 | orchestrator | changed: [testbed-manager]
2026-02-18 03:10:56.721062 | orchestrator |
2026-02-18 03:10:56.721088 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-18 03:10:56.721122 | orchestrator |
2026-02-18 03:10:56.721142 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-18 03:10:56.721158 | orchestrator | Wednesday 18 February 2026 03:09:37 +0000 (0:00:00.491) 0:02:40.677 ****
2026-02-18 03:10:56.721174 | orchestrator | ok: [testbed-manager]
2026-02-18 03:10:56.721190 | orchestrator |
2026-02-18 03:10:56.721204 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-18 03:10:56.721218 | orchestrator | Wednesday 18 February 2026 03:09:37 +0000 (0:00:00.388) 0:02:41.066 ****
2026-02-18 03:10:56.721231 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-18 03:10:56.721246 | orchestrator |
2026-02-18 03:10:56.721262 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-18 03:10:56.721277 | orchestrator | Wednesday 18 February 2026 03:09:38 +0000 (0:00:00.279) 0:02:41.345 ****
2026-02-18 03:10:56.721293 | orchestrator | ok: [testbed-manager]
2026-02-18 03:10:56.721308 | orchestrator |
2026-02-18 03:10:56.721339 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-18 03:10:56.721354 | orchestrator | Wednesday 18 February 2026 03:09:39 +0000 (0:00:00.838) 0:02:42.184 ****
2026-02-18 03:10:56.721370 | orchestrator | ok: [testbed-manager]
2026-02-18 03:10:56.721384 | orchestrator |
2026-02-18 03:10:56.721459 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-18 03:10:56.721476 | orchestrator | Wednesday 18 February 2026 03:09:40 +0000 (0:00:01.682) 0:02:43.867 ****
2026-02-18 03:10:56.721491 | orchestrator | changed: [testbed-manager]
2026-02-18 03:10:56.721506 | orchestrator |
2026-02-18 03:10:56.721521 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-18 03:10:56.721536 | orchestrator | Wednesday 18 February 2026 03:09:41 +0000 (0:00:00.850) 0:02:44.718 ****
2026-02-18 03:10:56.721550 | orchestrator | ok: [testbed-manager]
2026-02-18 03:10:56.721565 | orchestrator |
2026-02-18 03:10:56.721580 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-18 03:10:56.721596 | orchestrator | Wednesday 18 February 2026 03:09:42 +0000 (0:00:00.492) 0:02:45.210 ****
2026-02-18 03:10:56.721613 | orchestrator | changed: [testbed-manager]
2026-02-18 03:10:56.721629 | orchestrator |
2026-02-18 03:10:56.721645 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-18 03:10:56.721662 | orchestrator | Wednesday 18 February 2026 03:09:50 +0000 (0:00:07.953) 0:02:53.163 ****
2026-02-18 03:10:56.721678 | orchestrator | changed: [testbed-manager]
2026-02-18 03:10:56.721694 | orchestrator |
2026-02-18 03:10:56.721710 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-18 03:10:56.721727 | orchestrator | Wednesday 18 February 2026 03:10:03 +0000 (0:00:13.231) 0:03:06.395 ****
2026-02-18 03:10:56.721743 | orchestrator | ok: [testbed-manager]
2026-02-18 03:10:56.721759 | orchestrator |
2026-02-18 03:10:56.721774 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-18 03:10:56.721789 | orchestrator |
2026-02-18 03:10:56.721805 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-18 03:10:56.721822 | orchestrator | Wednesday 18 February 2026 03:10:04 +0000 (0:00:00.774) 0:03:07.169 ****
2026-02-18 03:10:56.721838 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:10:56.721853 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:10:56.721868 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:10:56.721884 | orchestrator |
2026-02-18 03:10:56.721900 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-18 03:10:56.721915 | orchestrator | Wednesday 18 February 2026 03:10:04 +0000 (0:00:00.338) 0:03:07.508 ****
2026-02-18 03:10:56.721931 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:10:56.721947 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:10:56.721964 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:10:56.721980 | orchestrator |
2026-02-18 03:10:56.721996 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-18 03:10:56.722154 | orchestrator | Wednesday 18 February 2026 03:10:04 +0000 (0:00:00.329) 0:03:07.837 ****
2026-02-18 03:10:56.722182 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:10:56.722193 | orchestrator |
2026-02-18 03:10:56.722203 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-18 03:10:56.722213 | orchestrator | Wednesday 18 February 2026 03:10:05 +0000 (0:00:00.740) 0:03:08.577 ****
2026-02-18 03:10:56.722222 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-18 03:10:56.722232 | orchestrator |
2026-02-18 03:10:56.722244 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-18 03:10:56.722260 | orchestrator | Wednesday 18 February 2026 03:10:06 +0000 (0:00:00.866) 0:03:09.444 ****
2026-02-18 03:10:56.722276 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-18 03:10:56.722292 | orchestrator |
2026-02-18 03:10:56.722306 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-18 03:10:56.722340 | orchestrator | Wednesday 18 February 2026 03:10:07 +0000 (0:00:00.896) 0:03:10.340 ****
2026-02-18 03:10:56.722357 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:10:56.722373 | orchestrator |
2026-02-18 03:10:56.722386 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-18 03:10:56.722396 | orchestrator | Wednesday 18 February 2026 03:10:07 +0000 (0:00:00.126) 0:03:10.466 ****
2026-02-18 03:10:56.722405 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-18 03:10:56.722502 | orchestrator |
2026-02-18 03:10:56.722517 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-18 03:10:56.722527 | orchestrator | Wednesday 18 February 2026 03:10:08 +0000 (0:00:00.993) 0:03:11.460 ****
2026-02-18 03:10:56.722537 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:10:56.722546 | orchestrator |
2026-02-18 03:10:56.722555 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-18 03:10:56.722565 | orchestrator | Wednesday 18 February 2026 03:10:08 +0000 (0:00:00.125) 0:03:11.586 ****
2026-02-18 03:10:56.722574 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:10:56.722584 | orchestrator |
2026-02-18 03:10:56.722592 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-18 03:10:56.722599 | orchestrator | Wednesday 18 February 2026 03:10:08 +0000 (0:00:00.129) 0:03:11.715 ****
2026-02-18 03:10:56.722607 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:10:56.722615 | orchestrator |
2026-02-18 03:10:56.722622 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-18 03:10:56.722639 | orchestrator | Wednesday 18 February 2026 03:10:08 +0000 (0:00:00.139) 0:03:11.854 ****
2026-02-18 03:10:56.722647 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:10:56.722655 | orchestrator |
2026-02-18 03:10:56.722663 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-18 03:10:56.722670 | orchestrator | Wednesday 18 February 2026 03:10:08 +0000 (0:00:00.144) 0:03:11.999 ****
2026-02-18 03:10:56.722678 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-18 03:10:56.722686 | orchestrator |
2026-02-18 03:10:56.722694 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-18 03:10:56.722702 | orchestrator | Wednesday 18 February 2026 03:10:14 +0000 (0:00:05.471) 0:03:17.470 ****
2026-02-18 03:10:56.722709 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-18 03:10:56.722717 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-02-18 03:10:56.722740 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-18 03:11:21.520830 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-18 03:11:21.520933 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-18 03:11:21.520944 | orchestrator |
2026-02-18 03:11:21.520953 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-18 03:11:21.520962 | orchestrator | Wednesday 18 February 2026 03:10:56 +0000 (0:00:42.368) 0:03:59.839 ****
2026-02-18 03:11:21.520970 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-18 03:11:21.520979 | orchestrator |
2026-02-18 03:11:21.520987 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-18 03:11:21.520995 | orchestrator | Wednesday 18 February 2026 03:10:58 +0000 (0:00:01.346) 0:04:01.185 ****
2026-02-18 03:11:21.521003 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-18 03:11:21.521011 | orchestrator |
2026-02-18 03:11:21.521019 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-18 03:11:21.521026 | orchestrator | Wednesday 18 February 2026 03:10:59 +0000 (0:00:01.808) 0:04:02.993 ****
2026-02-18 03:11:21.521035 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-18 03:11:21.521042 | orchestrator |
2026-02-18 03:11:21.521050 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-18 03:11:21.521059 | orchestrator | Wednesday 18 February 2026 03:11:01 +0000 (0:00:01.167) 0:04:04.161 ****
2026-02-18 03:11:21.521087 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:11:21.521096 | orchestrator |
2026-02-18 03:11:21.521104 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-18 03:11:21.521112 | orchestrator | Wednesday 18 February 2026 03:11:01 +0000 (0:00:00.146) 0:04:04.308 ****
2026-02-18 03:11:21.521120 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-18 03:11:21.521128 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-18 03:11:21.521136 | orchestrator |
2026-02-18 03:11:21.521144 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-18 03:11:21.521152 | orchestrator | Wednesday 18 February 2026 03:11:03 +0000 (0:00:01.865) 0:04:06.174 ****
2026-02-18 03:11:21.521159 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:11:21.521167 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:11:21.521175 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:11:21.521183 | orchestrator |
2026-02-18 03:11:21.521191 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-18 03:11:21.521198 | orchestrator | Wednesday 18 February 2026 03:11:03 +0000 (0:00:00.349) 0:04:06.523 ****
2026-02-18 03:11:21.521206 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:11:21.521214 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:11:21.521222 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:11:21.521230 | orchestrator |
2026-02-18 03:11:21.521238 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-18 03:11:21.521245 | orchestrator |
2026-02-18 03:11:21.521253 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-18 03:11:21.521261 | orchestrator | Wednesday 18 February 2026 03:11:04 +0000 (0:00:00.882) 0:04:07.406 ****
2026-02-18 03:11:21.521269 | orchestrator | ok: [testbed-manager]
2026-02-18 03:11:21.521276 | orchestrator |
2026-02-18 03:11:21.521285 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-18 03:11:21.521292 | orchestrator | Wednesday 18 February 2026 03:11:04 +0000 (0:00:00.361) 0:04:07.768 ****
2026-02-18 03:11:21.521300 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-18 03:11:21.521308 | orchestrator |
2026-02-18 03:11:21.521316 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-18 03:11:21.521324 | orchestrator | Wednesday 18 February 2026 03:11:04 +0000 (0:00:00.243) 0:04:08.011 ****
2026-02-18 03:11:21.521331 | orchestrator | changed: [testbed-manager]
2026-02-18 03:11:21.521339 | orchestrator |
2026-02-18 03:11:21.521347 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-18 03:11:21.521358 | orchestrator |
2026-02-18 03:11:21.521373 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-18 03:11:21.521387 | orchestrator | Wednesday 18 February 2026 03:11:10 +0000 (0:00:05.942) 0:04:13.953 ****
2026-02-18 03:11:21.521400 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:11:21.521414 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:11:21.521451 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:11:21.521464 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:11:21.521476 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:11:21.521489 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:11:21.521502 | orchestrator |
2026-02-18 03:11:21.521515 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-18 03:11:21.521530 | orchestrator | Wednesday 18 February 2026 03:11:11 +0000 (0:00:00.962) 0:04:14.915 ****
2026-02-18 03:11:21.521545 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-18 03:11:21.521559 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-18 03:11:21.521573 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-18 03:11:21.521583 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-18 03:11:21.521601 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-18 03:11:21.521610 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-18 03:11:21.521619 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-18 03:11:21.521628 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-18 03:11:21.521637 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-18 03:11:21.521660 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-18 03:11:21.521670 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-18 03:11:21.521680 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-18 03:11:21.521688 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-18 03:11:21.521697 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-18 03:11:21.521707 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-18 03:11:21.521731 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-18 03:11:21.521741 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-18 03:11:21.521750 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-18 03:11:21.521758 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-18 03:11:21.521768 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-18 03:11:21.521777 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-18 03:11:21.521784 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-18 03:11:21.521792 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-18 03:11:21.521800 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-18 03:11:21.521808 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-18 03:11:21.521815 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-18 03:11:21.521823 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-18 03:11:21.521831 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-18 03:11:21.521839 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-18 03:11:21.521846 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-18 03:11:21.521854 | orchestrator |
2026-02-18 03:11:21.521862 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-18 03:11:21.521870 | orchestrator | Wednesday 18 February 2026 03:11:20 +0000 (0:00:08.414) 0:04:23.329 ****
2026-02-18 03:11:21.521877 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:11:21.521885 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:11:21.521893 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:11:21.521901 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:11:21.521908 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:11:21.521916 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:11:21.521924 | orchestrator |
2026-02-18 03:11:21.521932 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-18 03:11:21.521939 | orchestrator | Wednesday 18 February 2026 03:11:20 +0000 (0:00:00.582) 0:04:23.912 ****
2026-02-18 03:11:21.521947 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:11:21.521961 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:11:21.521969 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:11:21.521977 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:11:21.521984 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:11:21.521992 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:11:21.522000 | orchestrator |
2026-02-18 03:11:21.522007 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:11:21.522071 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 03:11:21.522084 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-18 03:11:21.522093 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-18 03:11:21.522101 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-18 03:11:21.522108 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-18 03:11:21.522116 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-18 03:11:21.522124 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-18 03:11:21.522132 | orchestrator |
2026-02-18 03:11:21.522140 | orchestrator |
2026-02-18 03:11:21.522148 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:11:21.522155 | orchestrator | Wednesday 18 February 2026 03:11:21 +0000 (0:00:00.710) 0:04:24.623 ****
2026-02-18 03:11:21.522170 | orchestrator | ===============================================================================
2026-02-18 03:11:21.946409 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.97s
2026-02-18 03:11:21.946611 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.37s
2026-02-18 03:11:21.946635 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.19s
2026-02-18 03:11:21.946647 | orchestrator | kubectl : Install required packages ------------------------------------ 13.23s
2026-02-18 03:11:21.946658 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.38s
2026-02-18 03:11:21.946669 | orchestrator | Manage labels ----------------------------------------------------------- 8.41s
2026-02-18 03:11:21.946680 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.95s
2026-02-18 03:11:21.946691 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.94s
2026-02-18 03:11:21.946702 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.47s
2026-02-18 03:11:21.946712 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.21s
2026-02-18 03:11:21.946724 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.97s
2026-02-18 03:11:21.946736 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.82s
2026-02-18 03:11:21.946747 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.28s
2026-02-18 03:11:21.946757 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.04s
2026-02-18 03:11:21.946768 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.89s
2026-02-18 03:11:21.946780 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.87s
2026-02-18 03:11:21.946791 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.81s
2026-02-18 03:11:21.946832 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.68s
2026-02-18 03:11:21.946843 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.67s
2026-02-18 03:11:21.946854 | orchestrator | k3s_agent : Check if system is PXE-booted ------------------------------- 1.47s
2026-02-18 03:11:22.356063 | orchestrator | + osism apply copy-kubeconfig
2026-02-18 03:11:34.564426 | orchestrator | 2026-02-18 03:11:34 | INFO  | Task 95b1465e-1ec2-48e7-a9ff-73576afc753c (copy-kubeconfig) was prepared for execution.
2026-02-18 03:11:34.564569 | orchestrator | 2026-02-18 03:11:34 | INFO  | It takes a moment until task 95b1465e-1ec2-48e7-a9ff-73576afc753c (copy-kubeconfig) has been started and output is visible here.
2026-02-18 03:11:41.899258 | orchestrator |
2026-02-18 03:11:41.899349 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-18 03:11:41.899357 | orchestrator |
2026-02-18 03:11:41.899363 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-18 03:11:41.899368 | orchestrator | Wednesday 18 February 2026 03:11:38 +0000 (0:00:00.171) 0:00:00.171 ****
2026-02-18 03:11:41.899374 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-18 03:11:41.899380 | orchestrator |
2026-02-18 03:11:41.899385 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-18 03:11:41.899406 | orchestrator | Wednesday 18 February 2026 03:11:39 +0000 (0:00:00.802) 0:00:00.973 ****
2026-02-18 03:11:41.899412 | orchestrator | changed: [testbed-manager]
2026-02-18 03:11:41.899417 | orchestrator |
2026-02-18 03:11:41.899423 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-18 03:11:41.899428 | orchestrator | Wednesday 18 February 2026 03:11:41 +0000 (0:00:01.340) 0:00:02.314 ****
2026-02-18 03:11:41.899436 | orchestrator | changed: [testbed-manager]
2026-02-18 03:11:41.899508 | orchestrator |
2026-02-18 03:11:41.899520 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:11:41.899526 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 03:11:41.899532 | orchestrator |
2026-02-18 03:11:41.899537 | orchestrator |
2026-02-18 03:11:41.899542 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:11:41.899547 | orchestrator | Wednesday 18 February 2026 03:11:41 +0000 (0:00:00.496) 0:00:02.810 ****
2026-02-18 03:11:41.899551 | orchestrator | ===============================================================================
2026-02-18 03:11:41.899556 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.34s
2026-02-18 03:11:41.899561 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s
2026-02-18 03:11:41.899566 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.50s
2026-02-18 03:11:42.286389 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-02-18 03:11:54.540231 | orchestrator | 2026-02-18 03:11:54 | INFO  | Task b1e5015e-0b9c-4c54-8666-7cac329c9e0e (openstackclient) was prepared for execution.
2026-02-18 03:11:54.540377 | orchestrator | 2026-02-18 03:11:54 | INFO  | It takes a moment until task b1e5015e-0b9c-4c54-8666-7cac329c9e0e (openstackclient) has been started and output is visible here.
2026-02-18 03:12:43.703817 | orchestrator |
2026-02-18 03:12:43.703936 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-18 03:12:43.703952 | orchestrator |
2026-02-18 03:12:43.703963 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-18 03:12:43.703975 | orchestrator | Wednesday 18 February 2026 03:11:59 +0000 (0:00:00.267) 0:00:00.267 ****
2026-02-18 03:12:43.703987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-18 03:12:43.703999 | orchestrator |
2026-02-18 03:12:43.704038 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-18 03:12:43.704050 | orchestrator | Wednesday 18 February 2026 03:11:59 +0000 (0:00:00.237) 0:00:00.505 ****
2026-02-18 03:12:43.704061 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-18 03:12:43.704073 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-18 03:12:43.704083 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-18 03:12:43.704094 | orchestrator |
2026-02-18 03:12:43.704105 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-18 03:12:43.704116 | orchestrator | Wednesday 18 February 2026 03:12:00 +0000 (0:00:01.279) 0:00:01.784 ****
2026-02-18 03:12:43.704127 | orchestrator | changed: [testbed-manager]
2026-02-18 03:12:43.704137 | orchestrator |
2026-02-18 03:12:43.704148 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-18 03:12:43.704158 | orchestrator | Wednesday 18 February 2026 03:12:02 +0000 (0:00:01.568) 0:00:03.352 ****
2026-02-18 03:12:43.704169 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-18 03:12:43.704180 | orchestrator | ok: [testbed-manager]
2026-02-18 03:12:43.704192 | orchestrator |
2026-02-18 03:12:43.704202 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-18 03:12:43.704213 | orchestrator | Wednesday 18 February 2026 03:12:38 +0000 (0:00:36.074) 0:00:39.427 ****
2026-02-18 03:12:43.704223 | orchestrator | changed: [testbed-manager]
2026-02-18 03:12:43.704234 | orchestrator |
2026-02-18 03:12:43.704244 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-18 03:12:43.704255 | orchestrator | Wednesday 18 February 2026 03:12:39 +0000 (0:00:00.969) 0:00:40.397 ****
2026-02-18 03:12:43.704265 | orchestrator | ok: [testbed-manager]
2026-02-18 03:12:43.704276 | orchestrator |
2026-02-18 03:12:43.704286 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-18 03:12:43.704297 | orchestrator | Wednesday 18 February 2026 03:12:39 +0000 (0:00:00.617) 0:00:41.014 ****
2026-02-18 03:12:43.704307 | orchestrator | changed: [testbed-manager]
2026-02-18 03:12:43.704318 | orchestrator |
2026-02-18 03:12:43.704329 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-18 03:12:43.704341 | orchestrator | Wednesday 18 February 2026 03:12:41 +0000 (0:00:01.461) 0:00:42.476 ****
2026-02-18 03:12:43.704354 | orchestrator | changed: [testbed-manager]
2026-02-18 03:12:43.704366 | orchestrator |
2026-02-18 03:12:43.704378 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-18 03:12:43.704390 | orchestrator | Wednesday 18 February 2026 03:12:42 +0000 (0:00:00.744) 0:00:43.221 ****
2026-02-18 03:12:43.704402 | orchestrator | changed: [testbed-manager]
2026-02-18 03:12:43.704414 | orchestrator |
2026-02-18 03:12:43.704426 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-18 03:12:43.704438 | orchestrator | Wednesday 18 February 2026 03:12:42 +0000 (0:00:00.628) 0:00:43.849 ****
2026-02-18 03:12:43.704450 | orchestrator | ok: [testbed-manager]
2026-02-18 03:12:43.704462 | orchestrator |
2026-02-18 03:12:43.704474 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:12:43.704575 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 03:12:43.704590 | orchestrator |
2026-02-18 03:12:43.704602 | orchestrator |
2026-02-18 03:12:43.704615 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:12:43.704627 | orchestrator | Wednesday 18 February 2026 03:12:43 +0000 (0:00:00.421) 0:00:44.271 ****
2026-02-18 03:12:43.704640 | orchestrator | ===============================================================================
2026-02-18 03:12:43.704653 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.07s
2026-02-18 03:12:43.704665 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.57s
2026-02-18 03:12:43.704690 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.46s
2026-02-18 03:12:43.704703 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.28s
2026-02-18 03:12:43.704715 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.97s
2026-02-18 03:12:43.704745 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.74s
2026-02-18 03:12:43.704756 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.63s
2026-02-18 03:12:43.704767 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.62s
2026-02-18 03:12:43.704778 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.42s
2026-02-18 03:12:43.704788 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.24s
2026-02-18 03:12:46.117798 | orchestrator | 2026-02-18 03:12:46 | INFO  | Task f5be5ffb-6e92-4b26-998d-5556c5a584f7 (common) was prepared for execution.
2026-02-18 03:12:46.117899 | orchestrator | 2026-02-18 03:12:46 | INFO  | It takes a moment until task f5be5ffb-6e92-4b26-998d-5556c5a584f7 (common) has been started and output is visible here.
2026-02-18 03:12:58.856946 | orchestrator | 2026-02-18 03:12:58.857056 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-18 03:12:58.857067 | orchestrator | 2026-02-18 03:12:58.857073 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-18 03:12:58.857079 | orchestrator | Wednesday 18 February 2026 03:12:50 +0000 (0:00:00.298) 0:00:00.298 **** 2026-02-18 03:12:58.857085 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:12:58.857091 | orchestrator | 2026-02-18 03:12:58.857097 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-18 03:12:58.857103 | orchestrator | Wednesday 18 February 2026 03:12:51 +0000 (0:00:01.355) 0:00:01.654 **** 2026-02-18 03:12:58.857108 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 03:12:58.857114 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 03:12:58.857120 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 03:12:58.857125 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 03:12:58.857131 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 03:12:58.857136 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 03:12:58.857141 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 03:12:58.857147 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 03:12:58.857167 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 
2026-02-18 03:12:58.857173 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 03:12:58.857178 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 03:12:58.857185 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 03:12:58.857190 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 03:12:58.857195 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 03:12:58.857201 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 03:12:58.857206 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 03:12:58.857212 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 03:12:58.857232 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 03:12:58.857238 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 03:12:58.857247 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 03:12:58.857256 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 03:12:58.857264 | orchestrator | 2026-02-18 03:12:58.857273 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-18 03:12:58.857282 | orchestrator | Wednesday 18 February 2026 03:12:54 +0000 (0:00:02.643) 0:00:04.297 **** 2026-02-18 03:12:58.857290 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:12:58.857301 | orchestrator | 2026-02-18 03:12:58.857311 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-18 03:12:58.857325 | orchestrator | Wednesday 18 February 2026 03:12:55 +0000 (0:00:01.431) 0:00:05.728 **** 2026-02-18 03:12:58.857337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:12:58.857349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:12:58.857372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:12:58.857379 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:12:58.857385 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:12:58.857390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:12:58.857403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:12:58.857408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:58.857414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:58.857424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:59.862406 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:59.862522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:59.862555 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:59.862565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:59.862575 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:59.862598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 
03:12:59.862607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:59.862639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:59.862649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:59.862657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:59.862673 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:12:59.862682 | orchestrator | 2026-02-18 03:12:59.862692 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-18 03:12:59.862702 | orchestrator | Wednesday 18 February 2026 03:12:59 +0000 (0:00:03.539) 0:00:09.268 **** 2026-02-18 03:12:59.862713 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:12:59.862723 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:12:59.862732 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:12:59.862741 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:12:59.862751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:12:59.862771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:00.548978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:00.549091 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:13:00.549148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:00.549162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:00.549171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:00.549181 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:13:00.549190 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:00.549211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:00.549220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:00.549229 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:13:00.549254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:00.549271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:00.549280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:00.549289 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:13:00.549298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:00.549307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:00.549316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:00.549326 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:13:00.549336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:00.549350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:01.463955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:01.464095 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:13:01.464119 | orchestrator | 2026-02-18 03:13:01.464137 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-18 03:13:01.464154 | orchestrator | Wednesday 18 February 2026 03:13:00 +0000 (0:00:01.006) 0:00:10.275 **** 2026-02-18 03:13:01.464170 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:01.464188 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:01.464205 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:01.464242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:01.464265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:01.464306 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:13:01.464324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:01.464339 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:13:01.464385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:01.464404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:01.464420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:01.464435 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:13:01.464450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:01.464465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:01.464512 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:01.464544 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:13:01.464562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:01.464600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:06.737149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:06.737256 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:13:06.737274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:06.737289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:06.737306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:06.737326 | 
orchestrator | skipping: [testbed-node-4] 2026-02-18 03:13:06.737346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 03:13:06.737400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:06.737415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:06.737426 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:13:06.737437 | orchestrator | 2026-02-18 03:13:06.737449 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-18 
03:13:06.737461 | orchestrator | Wednesday 18 February 2026 03:13:02 +0000 (0:00:01.919) 0:00:12.195 **** 2026-02-18 03:13:06.737472 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:13:06.737483 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:13:06.737547 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:13:06.737559 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:13:06.737587 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:13:06.737599 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:13:06.737610 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:13:06.737621 | orchestrator | 2026-02-18 03:13:06.737632 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-18 03:13:06.737642 | orchestrator | Wednesday 18 February 2026 03:13:03 +0000 (0:00:00.709) 0:00:12.904 **** 2026-02-18 03:13:06.737653 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:13:06.737670 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:13:06.737688 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:13:06.737706 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:13:06.737725 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:13:06.737743 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:13:06.737761 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:13:06.737779 | orchestrator | 2026-02-18 03:13:06.737796 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-18 03:13:06.737814 | orchestrator | Wednesday 18 February 2026 03:13:04 +0000 (0:00:00.885) 0:00:13.790 **** 2026-02-18 03:13:06.737834 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:13:06.737875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:13:06.737911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:13:06.737938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:13:06.737960 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:13:06.737982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:13:06.738105 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:13:09.552389 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552631 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552680 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:13:09.552686 | orchestrator | 2026-02-18 03:13:09.552694 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-18 03:13:09.552701 | orchestrator | Wednesday 18 February 2026 03:13:07 +0000 (0:00:03.503) 0:00:17.293 **** 2026-02-18 03:13:09.552707 | orchestrator | [WARNING]: Skipped 2026-02-18 
03:13:09.552714 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-18 03:13:09.552722 | orchestrator | to this access issue: 2026-02-18 03:13:09.552728 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-18 03:13:09.552735 | orchestrator | directory 2026-02-18 03:13:09.552741 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 03:13:09.552748 | orchestrator | 2026-02-18 03:13:09.552754 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-18 03:13:09.552760 | orchestrator | Wednesday 18 February 2026 03:13:08 +0000 (0:00:00.981) 0:00:18.274 **** 2026-02-18 03:13:09.552766 | orchestrator | [WARNING]: Skipped 2026-02-18 03:13:09.552776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-18 03:13:20.224060 | orchestrator | to this access issue: 2026-02-18 03:13:20.224177 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-18 03:13:20.224195 | orchestrator | directory 2026-02-18 03:13:20.224208 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 03:13:20.224220 | orchestrator | 2026-02-18 03:13:20.224232 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-18 03:13:20.224243 | orchestrator | Wednesday 18 February 2026 03:13:09 +0000 (0:00:01.315) 0:00:19.590 **** 2026-02-18 03:13:20.224277 | orchestrator | [WARNING]: Skipped 2026-02-18 03:13:20.224290 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-18 03:13:20.224300 | orchestrator | to this access issue: 2026-02-18 03:13:20.224312 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-18 03:13:20.224322 | orchestrator | directory 2026-02-18 03:13:20.224334 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-02-18 03:13:20.224345 | orchestrator | 2026-02-18 03:13:20.224356 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-18 03:13:20.224367 | orchestrator | Wednesday 18 February 2026 03:13:10 +0000 (0:00:00.894) 0:00:20.485 **** 2026-02-18 03:13:20.224377 | orchestrator | [WARNING]: Skipped 2026-02-18 03:13:20.224387 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-18 03:13:20.224397 | orchestrator | to this access issue: 2026-02-18 03:13:20.224408 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-18 03:13:20.224418 | orchestrator | directory 2026-02-18 03:13:20.224430 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 03:13:20.224441 | orchestrator | 2026-02-18 03:13:20.224452 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-18 03:13:20.224463 | orchestrator | Wednesday 18 February 2026 03:13:11 +0000 (0:00:00.855) 0:00:21.341 **** 2026-02-18 03:13:20.224474 | orchestrator | changed: [testbed-manager] 2026-02-18 03:13:20.224484 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:13:20.224581 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:13:20.224595 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:13:20.224607 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:13:20.224618 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:13:20.224650 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:13:20.224662 | orchestrator | 2026-02-18 03:13:20.224673 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-18 03:13:20.224685 | orchestrator | Wednesday 18 February 2026 03:13:14 +0000 (0:00:02.730) 0:00:24.072 **** 2026-02-18 03:13:20.224696 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 03:13:20.224711 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 03:13:20.224722 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 03:13:20.224733 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 03:13:20.224743 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 03:13:20.224753 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 03:13:20.224768 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 03:13:20.224779 | orchestrator | 2026-02-18 03:13:20.224790 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-18 03:13:20.224800 | orchestrator | Wednesday 18 February 2026 03:13:16 +0000 (0:00:02.240) 0:00:26.312 **** 2026-02-18 03:13:20.224811 | orchestrator | changed: [testbed-manager] 2026-02-18 03:13:20.224823 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:13:20.224834 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:13:20.224845 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:13:20.224855 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:13:20.224866 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:13:20.224876 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:13:20.224887 | orchestrator | 2026-02-18 03:13:20.224898 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-18 03:13:20.224920 | orchestrator | Wednesday 18 February 2026 03:13:18 +0000 (0:00:01.997) 0:00:28.309 **** 2026-02-18 
03:13:20.224937 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:13:20.224972 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:13:20.224986 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 03:13:20.224998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:20.225009 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:20.225021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:20.225037 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:20.225055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:20.225074 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:20.225096 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.144659 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:26.144772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.144788 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.144820 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:26.144898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.144943 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.144975 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:26.145033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.145055 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.145075 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.145096 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.145118 | orchestrator |
2026-02-18 03:13:26.145141 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-18 03:13:26.145163 | orchestrator | Wednesday 18 February 2026 03:13:20 +0000 (0:00:01.760) 0:00:30.070 ****
2026-02-18 03:13:26.145181 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-18 03:13:26.145194 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-18 03:13:26.145219 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-18 03:13:26.145231 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-18 03:13:26.145243 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-18 03:13:26.145253 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-18 03:13:26.145264 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-18 03:13:26.145275 | orchestrator |
2026-02-18 03:13:26.145286 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-18 03:13:26.145297 | orchestrator | Wednesday 18 February 2026 03:13:22 +0000 (0:00:01.957) 0:00:32.027 ****
2026-02-18 03:13:26.145308 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-18 03:13:26.145319 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-18 03:13:26.145330 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-18 03:13:26.145358 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-18 03:13:26.145370 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-18 03:13:26.145380 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-18 03:13:26.145391 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-18 03:13:26.145402 | orchestrator |
2026-02-18 03:13:26.145412 | orchestrator | TASK [common : Check common containers] ****************************************
2026-02-18 03:13:26.145423 | orchestrator | Wednesday 18 February 2026 03:13:24 +0000 (0:00:01.757) 0:00:33.785 ****
2026-02-18 03:13:26.145434 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:26.145456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:26.720870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:26.720972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:26.721008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:26.721028 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:26.721036 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.721043 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-18 03:13:26.721050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.721071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.721078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.721089 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.721099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.721106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.721113 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.721121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:13:26.721134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:14:54.029963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:14:54.030092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:14:54.030100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:14:54.030114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:14:54.030118 | orchestrator |
2026-02-18 03:14:54.030124 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-18 03:14:54.030129 | orchestrator | Wednesday 18 February 2026 03:13:26 +0000 (0:00:02.664) 0:00:36.449 ****
2026-02-18 03:14:54.030133 | orchestrator | changed: [testbed-manager]
2026-02-18 03:14:54.030138 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:14:54.030142 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:14:54.030145 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:14:54.030149 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:14:54.030153 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:14:54.030157 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:14:54.030160 | orchestrator |
2026-02-18 03:14:54.030165 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-18 03:14:54.030171 | orchestrator | Wednesday 18 February 2026 03:13:28 +0000 (0:00:01.412) 0:00:37.862 ****
2026-02-18 03:14:54.030178 | orchestrator | changed: [testbed-manager]
2026-02-18 03:14:54.030184 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:14:54.030194 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:14:54.030202 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:14:54.030207 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:14:54.030213 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:14:54.030220 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:14:54.030226 | orchestrator |
2026-02-18 03:14:54.030233 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 03:14:54.030239 | orchestrator | Wednesday 18 February 2026 03:13:29 +0000 (0:00:00.066) 0:00:39.180 ****
2026-02-18 03:14:54.030245 | orchestrator |
2026-02-18 03:14:54.030252 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 03:14:54.030258 | orchestrator | Wednesday 18 February 2026 03:13:29 +0000 (0:00:00.065) 0:00:39.246 ****
2026-02-18 03:14:54.030264 | orchestrator |
2026-02-18 03:14:54.030270 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 03:14:54.030277 | orchestrator | Wednesday 18 February 2026 03:13:29 +0000 (0:00:00.065) 0:00:39.312 ****
2026-02-18 03:14:54.030285 | orchestrator |
2026-02-18 03:14:54.030292 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 03:14:54.030300 | orchestrator | Wednesday 18 February 2026 03:13:29 +0000 (0:00:00.065) 0:00:39.378 ****
2026-02-18 03:14:54.030304 | orchestrator |
2026-02-18 03:14:54.030308 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 03:14:54.030317 | orchestrator | Wednesday 18 February 2026 03:13:29 +0000 (0:00:00.231) 0:00:39.610 ****
2026-02-18 03:14:54.030321 | orchestrator |
2026-02-18 03:14:54.030325 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 03:14:54.030329 | orchestrator | Wednesday 18 February 2026 03:13:29 +0000 (0:00:00.060) 0:00:39.670 ****
2026-02-18 03:14:54.030332 | orchestrator |
2026-02-18 03:14:54.030336 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 03:14:54.030340 | orchestrator | Wednesday 18 February 2026 03:13:29 +0000 (0:00:00.066) 0:00:39.736 ****
2026-02-18 03:14:54.030344 | orchestrator |
2026-02-18 03:14:54.030347 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-18 03:14:54.030351 | orchestrator | Wednesday 18 February 2026 03:13:30 +0000 (0:00:00.123) 0:00:39.860 ****
2026-02-18 03:14:54.030355 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:14:54.030358 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:14:54.030362 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:14:54.030366 | orchestrator | changed: [testbed-manager]
2026-02-18 03:14:54.030370 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:14:54.030383 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:14:54.030387 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:14:54.030391 | orchestrator |
2026-02-18 03:14:54.030394 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-18 03:14:54.030398 | orchestrator | Wednesday 18 February 2026 03:14:08 +0000 (0:00:38.222) 0:01:18.083 ****
2026-02-18 03:14:54.030402 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:14:54.030405 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:14:54.030409 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:14:54.030413 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:14:54.030417 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:14:54.030420 | orchestrator | changed: [testbed-manager]
2026-02-18 03:14:54.030424 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:14:54.030428 | orchestrator |
2026-02-18 03:14:54.030431 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-18 03:14:54.030435 | orchestrator | Wednesday 18 February 2026 03:14:42 +0000 (0:00:34.430) 0:01:52.513 ****
2026-02-18 03:14:54.030439 | orchestrator | ok: [testbed-manager]
2026-02-18 03:14:54.030444 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:14:54.030447 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:14:54.030451 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:14:54.030455 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:14:54.030458 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:14:54.030462 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:14:54.030466 | orchestrator |
2026-02-18 03:14:54.030469 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-18 03:14:54.030473 | orchestrator | Wednesday 18 February 2026 03:14:44 +0000 (0:00:02.048) 0:01:54.562 ****
2026-02-18 03:14:54.030477 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:14:54.030480 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:14:54.030484 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:14:54.030488 | orchestrator | changed: [testbed-manager]
2026-02-18 03:14:54.030491 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:14:54.030495 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:14:54.030499 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:14:54.030502 | orchestrator |
2026-02-18 03:14:54.030506 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:14:54.030511 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-18 03:14:54.030517 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-18 03:14:54.030527 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-18 03:14:54.030535 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-18 03:14:54.030540 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-18 03:14:54.030568 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-18 03:14:54.030573 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-18 03:14:54.030577 | orchestrator |
2026-02-18 03:14:54.030581 | orchestrator |
2026-02-18 03:14:54.030586 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:14:54.030590 | orchestrator | Wednesday 18 February 2026 03:14:53 +0000 (0:00:09.162) 0:02:03.724 ****
2026-02-18 03:14:54.030595 | orchestrator | ===============================================================================
2026-02-18 03:14:54.030599 | orchestrator | common : Restart fluentd container ------------------------------------- 38.22s
2026-02-18 03:14:54.030603 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.43s
2026-02-18 03:14:54.030608 | orchestrator | common : Restart cron container ----------------------------------------- 9.16s
2026-02-18 03:14:54.030612 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.54s
2026-02-18 03:14:54.030617 | orchestrator | common : Copying over config.json files for services -------------------- 3.50s
2026-02-18 03:14:54.030622 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.73s
2026-02-18 03:14:54.030626 | orchestrator | common : Check common containers ---------------------------------------- 2.66s
2026-02-18 03:14:54.030630 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.64s
2026-02-18 03:14:54.030635 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.24s
2026-02-18 03:14:54.030639 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.05s
2026-02-18 03:14:54.030643 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.00s
2026-02-18 03:14:54.030648 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.96s
2026-02-18 03:14:54.030652 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.92s
2026-02-18 03:14:54.030657 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.76s
2026-02-18 03:14:54.030661 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.76s
2026-02-18 03:14:54.030665 | orchestrator | common : include_tasks -------------------------------------------------- 1.43s
2026-02-18 03:14:54.030672 | orchestrator | common : Creating log volume -------------------------------------------- 1.41s
2026-02-18 03:14:54.497704 | orchestrator | common : include_tasks -------------------------------------------------- 1.36s
2026-02-18 03:14:54.497842 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.32s
2026-02-18 03:14:54.497858 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.32s
2026-02-18 03:14:56.931103 | orchestrator | 2026-02-18 03:14:56 | INFO  | Task 9e0421c1-034e-4c19-9453-ea1063d7a0aa (loadbalancer) was prepared for execution.
2026-02-18 03:14:56.931175 | orchestrator | 2026-02-18 03:14:56 | INFO  | It takes a moment until task 9e0421c1-034e-4c19-9453-ea1063d7a0aa (loadbalancer) has been started and output is visible here.
2026-02-18 03:15:12.545882 | orchestrator |
2026-02-18 03:15:12.546086 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 03:15:12.546119 | orchestrator |
2026-02-18 03:15:12.546131 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 03:15:12.546175 | orchestrator | Wednesday 18 February 2026 03:15:01 +0000 (0:00:00.275) 0:00:00.276 ****
2026-02-18 03:15:12.546194 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:15:12.546215 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:15:12.546234 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:15:12.546262 | orchestrator |
2026-02-18 03:15:12.546281 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 03:15:12.546298 | orchestrator | Wednesday 18 February 2026 03:15:01 +0000 (0:00:00.322) 0:00:00.598 ****
2026-02-18 03:15:12.546333 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-18 03:15:12.546351 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-18 03:15:12.546369 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-18 03:15:12.546385 | orchestrator |
2026-02-18 03:15:12.546404 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-18 03:15:12.546422 | orchestrator |
2026-02-18 03:15:12.546441 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-18 03:15:12.546459 | orchestrator | Wednesday 18 February 2026 03:15:02 +0000 (0:00:00.466) 0:00:01.065 ****
2026-02-18 03:15:12.546497 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:15:12.546517 | orchestrator |
2026-02-18 03:15:12.546580 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-18 03:15:12.546603 | orchestrator | Wednesday 18 February 2026 03:15:02 +0000 (0:00:00.613) 0:00:01.678 ****
2026-02-18 03:15:12.546617 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:15:12.546630 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:15:12.546642 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:15:12.546652 | orchestrator |
2026-02-18 03:15:12.546663 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-18 03:15:12.546674 | orchestrator | Wednesday 18 February 2026 03:15:03 +0000 (0:00:00.611) 0:00:02.289 ****
2026-02-18 03:15:12.546685 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:15:12.546695 | orchestrator |
2026-02-18 03:15:12.546706 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-18 03:15:12.546716 | orchestrator | Wednesday 18 February 2026 03:15:04 +0000 (0:00:00.832) 0:00:03.122 ****
2026-02-18 03:15:12.546727 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:15:12.546738 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:15:12.546748 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:15:12.546759 | orchestrator |
2026-02-18 03:15:12.546770 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-18 03:15:12.546781 | orchestrator | Wednesday 18 February 2026 03:15:04 +0000 (0:00:00.648) 0:00:03.770 ****
2026-02-18 03:15:12.546792 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-18 03:15:12.546802 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-18 03:15:12.546813 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-18 03:15:12.546823 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-18 03:15:12.546834 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-18 03:15:12.546846 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-18 03:15:12.546856 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-18 03:15:12.546867 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-18 03:15:12.546878 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-18 03:15:12.546888 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-18 03:15:12.546915 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-18 03:15:12.546925 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-18 03:15:12.546936 | orchestrator |
2026-02-18 03:15:12.546947 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-18 03:15:12.546957 | orchestrator | Wednesday 18 February 2026 03:15:08 +0000 (0:00:03.151) 0:00:06.922 ****
2026-02-18 03:15:12.546968 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-18 03:15:12.546979 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-18 03:15:12.546990 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-18 03:15:12.547001 | orchestrator |
2026-02-18 03:15:12.547019 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-18 03:15:12.547048 | orchestrator | Wednesday 18 February 2026 03:15:08 +0000 (0:00:00.688) 0:00:07.610 ****
2026-02-18 03:15:12.547067 | orchestrator | changed: [testbed-node-1] =>
(item=ip_vs) 2026-02-18 03:15:12.547087 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-18 03:15:12.547106 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-18 03:15:12.547124 | orchestrator | 2026-02-18 03:15:12.547142 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-18 03:15:12.547156 | orchestrator | Wednesday 18 February 2026 03:15:10 +0000 (0:00:01.288) 0:00:08.899 **** 2026-02-18 03:15:12.547166 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-18 03:15:12.547177 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:12.547210 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-18 03:15:12.547222 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:12.547232 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-18 03:15:12.547243 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:12.547253 | orchestrator | 2026-02-18 03:15:12.547264 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-18 03:15:12.547274 | orchestrator | Wednesday 18 February 2026 03:15:10 +0000 (0:00:00.584) 0:00:09.484 **** 2026-02-18 03:15:12.547288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:12.547316 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:12.547328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:12.547348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 
03:15:12.547360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:12.547380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:17.901472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:17.901663 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:17.901686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:17.901699 | orchestrator | 2026-02-18 03:15:17.901713 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-18 03:15:17.901726 | orchestrator | Wednesday 18 February 2026 03:15:12 +0000 (0:00:01.878) 0:00:11.362 **** 2026-02-18 03:15:17.901761 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:15:17.901789 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:15:17.901812 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:15:17.901824 | orchestrator | 2026-02-18 03:15:17.901835 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-18 03:15:17.901846 | orchestrator | Wednesday 18 February 2026 03:15:13 +0000 (0:00:00.964) 0:00:12.327 **** 2026-02-18 03:15:17.901857 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-18 03:15:17.901868 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-18 
03:15:17.901879 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-18 03:15:17.901889 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-18 03:15:17.901900 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-18 03:15:17.901910 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-18 03:15:17.901921 | orchestrator | 2026-02-18 03:15:17.901931 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-18 03:15:17.901942 | orchestrator | Wednesday 18 February 2026 03:15:14 +0000 (0:00:01.474) 0:00:13.801 **** 2026-02-18 03:15:17.901953 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:15:17.901963 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:15:17.901974 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:15:17.901984 | orchestrator | 2026-02-18 03:15:17.901995 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-18 03:15:17.902005 | orchestrator | Wednesday 18 February 2026 03:15:15 +0000 (0:00:00.907) 0:00:14.708 **** 2026-02-18 03:15:17.902067 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:15:17.902080 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:15:17.902090 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:15:17.902101 | orchestrator | 2026-02-18 03:15:17.902112 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-18 03:15:17.902122 | orchestrator | Wednesday 18 February 2026 03:15:17 +0000 (0:00:01.384) 0:00:16.093 **** 2026-02-18 03:15:17.902134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 03:15:17.902176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:17.902189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:17.902203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32', '__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 03:15:17.902226 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:17.902237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 03:15:17.902287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:17.902300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:17.902311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32', '__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 03:15:17.902323 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:17.902342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 03:15:20.822859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:20.822983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:20.822998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32', '__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 03:15:20.823011 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:20.823023 | orchestrator | 2026-02-18 03:15:20.823034 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-02-18 03:15:20.823044 | orchestrator | Wednesday 18 February 2026 03:15:17 +0000 (0:00:00.632) 0:00:16.726 **** 2026-02-18 03:15:20.823055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:20.823066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:20.823076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:20.823125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:20.823144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:20.823162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32', 
'__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 03:15:20.823180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:20.823198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:20.823216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32', 
'__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 03:15:20.823264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:29.286690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:29.286843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32', 
'__omit_place_holder__11db9e00a734edb2a8f2058a3cfba3d3fbf36a32'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 03:15:29.286873 | orchestrator | 2026-02-18 03:15:29.286889 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-18 03:15:29.286902 | orchestrator | Wednesday 18 February 2026 03:15:20 +0000 (0:00:02.916) 0:00:19.642 **** 2026-02-18 03:15:29.286914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:29.286927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:29.286938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:29.286976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:29.287025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:29.287039 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:29.287051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:29.287062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:29.287074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:29.287085 | orchestrator | 2026-02-18 03:15:29.287096 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-18 03:15:29.287107 | orchestrator | Wednesday 18 February 2026 03:15:23 +0000 (0:00:03.131) 0:00:22.773 **** 2026-02-18 03:15:29.287129 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-18 03:15:29.287142 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-18 03:15:29.287155 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-18 03:15:29.287167 | orchestrator | 2026-02-18 03:15:29.287180 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-18 03:15:29.287192 | orchestrator | Wednesday 18 February 2026 03:15:25 +0000 (0:00:01.896) 0:00:24.669 **** 2026-02-18 03:15:29.287205 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-18 03:15:29.287217 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-18 03:15:29.287229 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-18 03:15:29.287241 | orchestrator | 2026-02-18 03:15:29.287253 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-18 03:15:29.287266 | orchestrator | Wednesday 18 February 2026 03:15:28 +0000 
(0:00:02.877) 0:00:27.547 **** 2026-02-18 03:15:29.287279 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:29.287292 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:29.287304 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:29.287317 | orchestrator | 2026-02-18 03:15:29.287338 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-18 03:15:41.118234 | orchestrator | Wednesday 18 February 2026 03:15:29 +0000 (0:00:00.565) 0:00:28.112 **** 2026-02-18 03:15:41.118321 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-18 03:15:41.118341 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-18 03:15:41.118348 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-18 03:15:41.118355 | orchestrator | 2026-02-18 03:15:41.118362 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-18 03:15:41.118368 | orchestrator | Wednesday 18 February 2026 03:15:31 +0000 (0:00:02.106) 0:00:30.219 **** 2026-02-18 03:15:41.118375 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-18 03:15:41.118382 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-18 03:15:41.118388 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-18 03:15:41.118394 | orchestrator | 2026-02-18 03:15:41.118400 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-18 03:15:41.118407 | orchestrator | Wednesday 18 February 2026 
03:15:33 +0000 (0:00:02.185) 0:00:32.404 **** 2026-02-18 03:15:41.118414 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-18 03:15:41.118420 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-18 03:15:41.118426 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-18 03:15:41.118433 | orchestrator | 2026-02-18 03:15:41.118449 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-18 03:15:41.118456 | orchestrator | Wednesday 18 February 2026 03:15:35 +0000 (0:00:01.462) 0:00:33.867 **** 2026-02-18 03:15:41.118463 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-18 03:15:41.118469 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-18 03:15:41.118475 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-18 03:15:41.118481 | orchestrator | 2026-02-18 03:15:41.118503 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-18 03:15:41.118510 | orchestrator | Wednesday 18 February 2026 03:15:36 +0000 (0:00:01.550) 0:00:35.418 **** 2026-02-18 03:15:41.118516 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:15:41.118522 | orchestrator | 2026-02-18 03:15:41.118528 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-18 03:15:41.118534 | orchestrator | Wednesday 18 February 2026 03:15:37 +0000 (0:00:00.552) 0:00:35.971 **** 2026-02-18 03:15:41.118542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:41.118552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:41.118592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:41.118623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:41.118635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:41.118645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:41.118665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:41.118672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:41.118678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:41.118685 | orchestrator | 2026-02-18 03:15:41.118691 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-18 03:15:41.118697 | orchestrator | Wednesday 18 February 2026 03:15:40 +0000 (0:00:03.297) 0:00:39.268 **** 2026-02-18 03:15:41.118713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 03:15:41.952110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:41.952237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:41.952278 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:41.952293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 03:15:41.952304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:41.952315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:41.952325 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:41.952335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 03:15:41.952379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:41.952392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:41.952410 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:41.952420 | orchestrator | 2026-02-18 03:15:41.952431 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-18 
03:15:41.952442 | orchestrator | Wednesday 18 February 2026 03:15:41 +0000 (0:00:00.678) 0:00:39.947 **** 2026-02-18 03:15:41.952454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 03:15:41.952464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:41.952474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:41.952484 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:41.952494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 03:15:41.952516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:42.902791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:42.902945 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:42.902972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 03:15:42.902993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:42.903012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:42.903030 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:42.903046 | orchestrator | 2026-02-18 03:15:42.903064 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-18 03:15:42.903083 | orchestrator | Wednesday 18 February 2026 03:15:41 +0000 (0:00:00.832) 0:00:40.780 **** 2026-02-18 03:15:42.903101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 03:15:42.903120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:42.903200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:42.903236 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:42.903253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 03:15:42.903270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:42.903287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:42.903304 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:42.903319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 03:15:42.903350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:42.903373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:42.903414 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:44.363793 | orchestrator | 2026-02-18 03:15:44.363879 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-18 03:15:44.363888 | orchestrator | Wednesday 18 February 2026 03:15:42 +0000 (0:00:00.920) 0:00:41.700 **** 2026-02-18 03:15:44.363899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 03:15:44.363909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:44.363916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:44.363923 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:44.363931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 03:15:44.363937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:44.363961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:44.363987 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:44.364009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 03:15:44.364017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:44.364024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:44.364030 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:44.364037 | orchestrator | 2026-02-18 03:15:44.364043 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-18 03:15:44.364049 | orchestrator | Wednesday 18 February 2026 03:15:43 +0000 (0:00:00.650) 0:00:42.351 **** 2026-02-18 03:15:44.364055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 03:15:44.364065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:44.364089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:44.364095 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:44.364111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 03:15:45.454279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:45.454408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:45.454428 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:45.454443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 03:15:45.454466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:45.454480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:45.454517 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:45.454529 | orchestrator | 2026-02-18 03:15:45.454542 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-18 03:15:45.454554 | orchestrator | Wednesday 18 February 2026 03:15:44 +0000 (0:00:00.839) 0:00:43.191 **** 2026-02-18 03:15:45.454618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-18 03:15:45.454654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:45.454666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:45.454677 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:45.454689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-18 03:15:45.454700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:45.454719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:45.454731 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:45.454748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-18 03:15:45.454766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:46.827042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:46.827137 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:46.827151 | orchestrator | 2026-02-18 03:15:46.827161 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-18 03:15:46.827172 | orchestrator | Wednesday 18 February 2026 03:15:45 +0000 (0:00:01.076) 0:00:44.267 **** 2026-02-18 03:15:46.827183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 03:15:46.827193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:46.827226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:46.827236 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:46.827246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 03:15:46.827268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:46.827293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:46.827303 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:46.827312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 03:15:46.827321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:46.827336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:46.827345 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:46.827354 | orchestrator | 2026-02-18 03:15:46.827363 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-18 03:15:46.827371 | orchestrator | Wednesday 18 February 2026 03:15:46 +0000 (0:00:00.606) 0:00:44.874 **** 2026-02-18 03:15:46.827380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 03:15:46.827390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:46.827412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:53.397125 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:53.397218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 03:15:53.397239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:53.397284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:53.397300 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:53.397314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 03:15:53.397329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 03:15:53.397359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 03:15:53.397374 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:53.397383 | orchestrator | 2026-02-18 03:15:53.397392 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-18 03:15:53.397401 | orchestrator | Wednesday 18 February 2026 03:15:46 +0000 (0:00:00.775) 0:00:45.650 **** 2026-02-18 03:15:53.397409 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-18 03:15:53.397432 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-18 03:15:53.397441 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-18 03:15:53.397448 | orchestrator | 2026-02-18 03:15:53.397456 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-18 03:15:53.397465 | orchestrator | Wednesday 18 February 2026 03:15:48 +0000 (0:00:01.691) 0:00:47.341 **** 2026-02-18 03:15:53.397473 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-18 03:15:53.397481 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-18 03:15:53.397489 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-18 03:15:53.397496 | orchestrator | 2026-02-18 03:15:53.397512 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-18 03:15:53.397519 | orchestrator | Wednesday 18 February 2026 03:15:50 +0000 (0:00:01.737) 0:00:49.079 **** 2026-02-18 03:15:53.397527 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-18 03:15:53.397535 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-18 03:15:53.397542 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-18 03:15:53.397550 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-18 03:15:53.397558 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:53.397625 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-18 03:15:53.397633 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:53.397641 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-18 03:15:53.397649 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:15:53.397657 | orchestrator | 2026-02-18 03:15:53.397664 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-18 03:15:53.397672 | orchestrator | Wednesday 18 February 2026 03:15:51 +0000 (0:00:00.809) 0:00:49.888 **** 2026-02-18 03:15:53.397681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:53.397691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:53.397706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-18 03:15:53.397724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:57.685047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:57.685181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 03:15:57.685207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:57.685227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:57.685243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 03:15:57.685259 | orchestrator | 2026-02-18 03:15:57.685278 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-18 03:15:57.685317 | orchestrator | Wednesday 18 February 2026 03:15:53 +0000 (0:00:02.335) 0:00:52.224 **** 2026-02-18 03:15:57.685335 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:15:57.685351 | orchestrator | 2026-02-18 03:15:57.685367 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-18 03:15:57.685383 | orchestrator | Wednesday 18 February 2026 03:15:54 +0000 (0:00:00.863) 0:00:53.087 **** 2026-02-18 03:15:57.685427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 03:15:57.685478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 03:15:57.685499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 03:15:57.685516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 03:15:57.685533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 03:15:57.685560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 03:15:57.685613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 03:15:57.685656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 03:15:58.341848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 03:15:58.341929 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 03:15:58.341939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 03:15:58.341959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 03:15:58.341966 | orchestrator | 2026-02-18 03:15:58.341973 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-02-18 03:15:58.341980 | orchestrator | Wednesday 18 February 2026 03:15:57 +0000 (0:00:03.416) 0:00:56.504 **** 2026-02-18 03:15:58.342004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 03:15:58.342068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 03:15:58.342077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 03:15:58.342083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 03:15:58.342090 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:15:58.342097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 03:15:58.342108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 03:15:58.342120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 03:15:58.342126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 03:15:58.342132 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:15:58.342145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 03:16:07.117040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 03:16:07.117173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-02-18 03:16:07.117192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 03:16:07.117229 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:07.117242 | orchestrator | 2026-02-18 03:16:07.117253 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-18 03:16:07.117268 | orchestrator | Wednesday 18 February 2026 03:15:58 +0000 (0:00:00.665) 0:00:57.170 **** 2026-02-18 03:16:07.117286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-18 03:16:07.117305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-18 03:16:07.117323 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:07.117360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-18 03:16:07.117378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-18 03:16:07.117395 | 
orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:07.117410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-18 03:16:07.117425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-18 03:16:07.117441 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:07.117456 | orchestrator | 2026-02-18 03:16:07.117472 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-18 03:16:07.117487 | orchestrator | Wednesday 18 February 2026 03:15:59 +0000 (0:00:01.176) 0:00:58.346 **** 2026-02-18 03:16:07.117503 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:16:07.117519 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:16:07.117535 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:16:07.117550 | orchestrator | 2026-02-18 03:16:07.117597 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-18 03:16:07.117616 | orchestrator | Wednesday 18 February 2026 03:16:00 +0000 (0:00:01.313) 0:00:59.660 **** 2026-02-18 03:16:07.117634 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:16:07.117650 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:16:07.117667 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:16:07.117684 | orchestrator | 2026-02-18 03:16:07.117701 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-18 03:16:07.117720 | orchestrator | Wednesday 18 February 2026 03:16:02 +0000 (0:00:02.073) 0:01:01.733 **** 2026-02-18 03:16:07.117736 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:16:07.117753 | 
orchestrator | 2026-02-18 03:16:07.117796 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-18 03:16:07.117815 | orchestrator | Wednesday 18 February 2026 03:16:03 +0000 (0:00:00.656) 0:01:02.390 **** 2026-02-18 03:16:07.117837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-18 03:16:07.117888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 03:16:07.117909 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:07.117927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-18 03:16:07.117944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 03:16:07.117975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:07.780036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-18 03:16:07.780177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 03:16:07.780197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:07.780210 | orchestrator | 2026-02-18 03:16:07.780223 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-18 03:16:07.780235 | orchestrator | Wednesday 18 February 2026 03:16:07 +0000 (0:00:03.544) 0:01:05.935 **** 2026-02-18 03:16:07.780248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-18 03:16:07.780261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 03:16:07.780315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:07.780329 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:07.780348 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-18 03:16:07.780360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 03:16:07.780371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:07.780382 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:07.780393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-18 03:16:07.780421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 03:16:17.748886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:17.749030 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:17.749061 | orchestrator | 2026-02-18 03:16:17.749082 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-18 03:16:17.749103 | orchestrator | Wednesday 18 February 2026 03:16:07 +0000 (0:00:00.669) 0:01:06.604 **** 2026-02-18 03:16:17.749145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-18 03:16:17.749168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-18 03:16:17.749189 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:17.749207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-18 03:16:17.749226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-18 03:16:17.749237 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:17.749248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-18 03:16:17.749259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-18 03:16:17.749270 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:17.749281 | orchestrator | 2026-02-18 03:16:17.749291 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-18 03:16:17.749302 | orchestrator | Wednesday 18 February 2026 03:16:08 +0000 (0:00:00.937) 0:01:07.542 **** 2026-02-18 03:16:17.749312 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:16:17.749324 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:16:17.749334 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:16:17.749345 | orchestrator | 2026-02-18 03:16:17.749355 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-18 03:16:17.749366 | orchestrator | Wednesday 18 February 2026 03:16:10 +0000 (0:00:01.706) 0:01:09.249 **** 2026-02-18 03:16:17.749404 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:16:17.749417 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:16:17.749429 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:16:17.749442 | orchestrator | 2026-02-18 03:16:17.749454 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-18 03:16:17.749467 | orchestrator | 
Wednesday 18 February 2026 03:16:12 +0000 (0:00:02.019) 0:01:11.269 **** 2026-02-18 03:16:17.749479 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:17.749491 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:17.749503 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:17.749516 | orchestrator | 2026-02-18 03:16:17.749527 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-18 03:16:17.749539 | orchestrator | Wednesday 18 February 2026 03:16:12 +0000 (0:00:00.324) 0:01:11.593 **** 2026-02-18 03:16:17.749551 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:16:17.749564 | orchestrator | 2026-02-18 03:16:17.749602 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-18 03:16:17.749615 | orchestrator | Wednesday 18 February 2026 03:16:13 +0000 (0:00:00.695) 0:01:12.289 **** 2026-02-18 03:16:17.749654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-18 03:16:17.749677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-18 03:16:17.749691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-18 03:16:17.749703 | orchestrator | 2026-02-18 03:16:17.749715 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-18 03:16:17.749728 | orchestrator | Wednesday 18 February 2026 03:16:16 +0000 (0:00:02.864) 0:01:15.153 **** 2026-02-18 03:16:17.749747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-18 03:16:17.749759 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:17.749770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-18 03:16:17.749781 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:17.749800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-18 03:16:25.685323 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:25.685449 | orchestrator | 2026-02-18 03:16:25.685475 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-18 03:16:25.685496 | orchestrator | Wednesday 18 February 2026 03:16:17 +0000 (0:00:01.419) 0:01:16.573 **** 2026-02-18 03:16:25.685539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 03:16:25.685562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 03:16:25.685644 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:25.685694 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 03:16:25.685713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 03:16:25.685731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 03:16:25.685748 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:25.685766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 03:16:25.685783 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:25.685803 | orchestrator | 2026-02-18 03:16:25.685820 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-18 03:16:25.685840 | orchestrator | Wednesday 18 February 2026 03:16:19 +0000 (0:00:01.787) 0:01:18.360 **** 2026-02-18 03:16:25.685858 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:25.685876 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:25.685893 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:25.685911 | orchestrator | 2026-02-18 03:16:25.685933 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-18 03:16:25.685954 | orchestrator | Wednesday 18 February 2026 03:16:19 +0000 (0:00:00.432) 0:01:18.793 **** 2026-02-18 03:16:25.685974 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:25.685991 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:25.686199 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:25.686226 | orchestrator | 2026-02-18 03:16:25.686244 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-18 03:16:25.686262 | orchestrator | Wednesday 18 February 2026 03:16:21 +0000 (0:00:01.369) 0:01:20.163 **** 2026-02-18 03:16:25.686279 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:16:25.686297 | orchestrator | 2026-02-18 03:16:25.686314 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-18 03:16:25.686331 | orchestrator | Wednesday 18 February 2026 03:16:22 +0000 (0:00:00.958) 0:01:21.122 **** 2026-02-18 03:16:25.686396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 03:16:25.686445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:16:25.686465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 
03:16:25.686485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 03:16:25.686503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 03:16:25.686535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:16:26.382784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 03:16:26.382951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 03:16:26.382974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 03:16:26.382987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:16:26.382999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 03:16:26.383032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 03:16:26.383053 | orchestrator | 2026-02-18 03:16:26.383074 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-18 03:16:26.383087 | orchestrator | Wednesday 18 February 2026 03:16:25 +0000 (0:00:03.483) 0:01:24.605 **** 2026-02-18 03:16:26.383100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 03:16:26.383112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:16:26.383124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 03:16:26.383136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 03:16:26.383147 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:26.383169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 03:16:32.692426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-02-18 03:16:32.692541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 03:16:32.692560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 03:16:32.692574 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:32.692647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 03:16:32.692660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:16:32.692725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 
03:16:32.692739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 03:16:32.692750 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:32.692761 | orchestrator | 2026-02-18 03:16:32.692773 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-18 03:16:32.692786 | orchestrator | Wednesday 18 February 2026 03:16:26 +0000 (0:00:00.721) 0:01:25.327 **** 2026-02-18 03:16:32.692798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-18 03:16:32.692810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-18 03:16:32.692822 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:32.692840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-18 03:16:32.692861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-18 03:16:32.692880 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:32.692900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-18 03:16:32.692919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-18 03:16:32.692938 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:32.692958 | orchestrator | 2026-02-18 03:16:32.692978 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-18 03:16:32.692998 | orchestrator | Wednesday 18 February 2026 03:16:27 +0000 (0:00:01.181) 0:01:26.509 **** 2026-02-18 03:16:32.693019 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:16:32.693053 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:16:32.693073 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:16:32.693095 | orchestrator | 2026-02-18 03:16:32.693114 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-18 03:16:32.693133 | orchestrator | Wednesday 18 February 2026 03:16:28 +0000 (0:00:01.320) 0:01:27.829 **** 2026-02-18 03:16:32.693152 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:16:32.693172 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:16:32.693191 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:16:32.693209 | orchestrator | 2026-02-18 03:16:32.693229 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-18 
03:16:32.693248 | orchestrator | Wednesday 18 February 2026 03:16:30 +0000 (0:00:02.007) 0:01:29.836 **** 2026-02-18 03:16:32.693268 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:32.693286 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:32.693303 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:32.693321 | orchestrator | 2026-02-18 03:16:32.693340 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-18 03:16:32.693358 | orchestrator | Wednesday 18 February 2026 03:16:31 +0000 (0:00:00.322) 0:01:30.159 **** 2026-02-18 03:16:32.693377 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:32.693395 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:32.693415 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:32.693435 | orchestrator | 2026-02-18 03:16:32.693454 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-18 03:16:32.693475 | orchestrator | Wednesday 18 February 2026 03:16:31 +0000 (0:00:00.317) 0:01:30.477 **** 2026-02-18 03:16:32.693494 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:16:32.693513 | orchestrator | 2026-02-18 03:16:32.693532 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-18 03:16:32.693552 | orchestrator | Wednesday 18 February 2026 03:16:32 +0000 (0:00:01.037) 0:01:31.515 **** 2026-02-18 03:16:36.089128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 03:16:36.089226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 03:16:36.089240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.089272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.089281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.089289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.089319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.089325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 03:16:36.089330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 03:16:36.089340 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.089345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.089350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.089363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.984549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.984684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 03:16:36.984727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 03:16:36.984742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.984755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.984782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.984812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:36.984825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2026-02-18 03:16:36.984844 | orchestrator | 2026-02-18 03:16:36.984858 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-18 03:16:36.984869 | orchestrator | Wednesday 18 February 2026 03:16:36 +0000 (0:00:03.693) 0:01:35.209 **** 2026-02-18 03:16:36.984881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-18 03:16:36.984895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-18 03:16:36.984906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 03:16:36.984927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 03:16:37.554329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 03:16:37.554493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 03:16:37.554521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 03:16:37.554542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 03:16:37.554560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 03:16:37.555268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 03:16:37.555362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:37.555385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:37.555419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-18 03:16:37.555435 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:37.555460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-18 03:16:37.555477 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:37.555494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-18 03:16:37.555512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 03:16:37.555541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 03:16:47.701782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 03:16:47.701881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 03:16:47.701910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:16:47.701921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-18 03:16:47.701927 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:47.701933 | orchestrator | 2026-02-18 03:16:47.701938 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-18 03:16:47.701944 | orchestrator | Wednesday 18 February 2026 03:16:37 +0000 (0:00:01.173) 0:01:36.382 **** 2026-02-18 03:16:47.701949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-18 03:16:47.701956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-18 03:16:47.701961 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:47.701966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-18 03:16:47.701970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-18 03:16:47.701974 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:47.701979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-18 03:16:47.701998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-18 03:16:47.702003 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:47.702007 | orchestrator | 2026-02-18 03:16:47.702051 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-18 03:16:47.702069 | orchestrator | Wednesday 18 February 2026 03:16:38 +0000 (0:00:01.376) 0:01:37.759 **** 2026-02-18 03:16:47.702075 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:16:47.702080 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:16:47.702084 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:16:47.702089 | orchestrator | 2026-02-18 03:16:47.702093 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-18 03:16:47.702097 | orchestrator | Wednesday 18 February 2026 03:16:40 +0000 (0:00:01.299) 0:01:39.058 **** 2026-02-18 03:16:47.702101 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:16:47.702106 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:16:47.702110 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:16:47.702114 | 
orchestrator | 2026-02-18 03:16:47.702119 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-18 03:16:47.702123 | orchestrator | Wednesday 18 February 2026 03:16:42 +0000 (0:00:02.122) 0:01:41.181 **** 2026-02-18 03:16:47.702127 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:47.702131 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:16:47.702135 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:16:47.702140 | orchestrator | 2026-02-18 03:16:47.702144 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-18 03:16:47.702148 | orchestrator | Wednesday 18 February 2026 03:16:42 +0000 (0:00:00.346) 0:01:41.528 **** 2026-02-18 03:16:47.702153 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:16:47.702157 | orchestrator | 2026-02-18 03:16:47.702161 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-18 03:16:47.702166 | orchestrator | Wednesday 18 February 2026 03:16:43 +0000 (0:00:01.062) 0:01:42.591 **** 2026-02-18 03:16:47.702177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 03:16:47.702189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 03:16:50.760835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 03:16:50.760949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 03:16:50.761036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 03:16:50.761055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 03:16:50.761078 | orchestrator | 2026-02-18 03:16:50.761092 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-18 03:16:50.761112 | orchestrator | Wednesday 18 February 2026 03:16:47 +0000 (0:00:04.064) 0:01:46.655 **** 2026-02-18 03:16:50.761163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 03:16:50.868192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 03:16:50.868294 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 03:16:50.868332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 03:16:50.868348 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:16:50.868356 | orchestrator | skipping: [testbed-node-1] 
2026-02-18 03:16:50.868364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 03:16:50.868381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 03:17:03.288639 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:03.288724 | orchestrator | 2026-02-18 03:17:03.288733 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-18 03:17:03.288740 | orchestrator | 
Wednesday 18 February 2026 03:16:50 +0000 (0:00:03.041) 0:01:49.697 **** 2026-02-18 03:17:03.288749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 03:17:03.288758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 03:17:03.288766 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:03.288772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 03:17:03.288778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 03:17:03.288784 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:03.288790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 03:17:03.288810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 03:17:03.288817 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:03.288823 | orchestrator | 2026-02-18 03:17:03.288829 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-18 03:17:03.288835 | orchestrator | Wednesday 18 February 2026 03:16:55 +0000 (0:00:04.256) 0:01:53.954 **** 2026-02-18 03:17:03.288858 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:03.288864 | orchestrator 
| changed: [testbed-node-1] 2026-02-18 03:17:03.288869 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:17:03.288875 | orchestrator | 2026-02-18 03:17:03.288881 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-18 03:17:03.288887 | orchestrator | Wednesday 18 February 2026 03:16:56 +0000 (0:00:01.371) 0:01:55.325 **** 2026-02-18 03:17:03.288892 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:03.288898 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:17:03.288904 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:17:03.288909 | orchestrator | 2026-02-18 03:17:03.288915 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-18 03:17:03.288932 | orchestrator | Wednesday 18 February 2026 03:16:58 +0000 (0:00:02.021) 0:01:57.346 **** 2026-02-18 03:17:03.288938 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:03.288944 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:03.288949 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:03.288955 | orchestrator | 2026-02-18 03:17:03.288961 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-18 03:17:03.288966 | orchestrator | Wednesday 18 February 2026 03:16:58 +0000 (0:00:00.329) 0:01:57.676 **** 2026-02-18 03:17:03.288972 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:17:03.288978 | orchestrator | 2026-02-18 03:17:03.288984 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-18 03:17:03.288989 | orchestrator | Wednesday 18 February 2026 03:16:59 +0000 (0:00:01.053) 0:01:58.729 **** 2026-02-18 03:17:03.288996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 03:17:03.289003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 03:17:03.289009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 03:17:03.289015 | 
orchestrator | 2026-02-18 03:17:03.289021 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-18 03:17:03.289033 | orchestrator | Wednesday 18 February 2026 03:17:02 +0000 (0:00:02.987) 0:02:01.717 **** 2026-02-18 03:17:03.289040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-18 03:17:03.289047 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:03.289057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-18 03:17:12.479728 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:12.479927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-18 03:17:12.479955 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:12.479968 | orchestrator | 2026-02-18 03:17:12.479980 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-18 03:17:12.479993 | orchestrator | Wednesday 18 February 2026 03:17:03 +0000 (0:00:00.396) 0:02:02.113 **** 2026-02-18 03:17:12.480005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-18 03:17:12.480019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-18 03:17:12.480031 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:12.480042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-18 03:17:12.480053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-18 03:17:12.480064 | orchestrator | skipping: 
[testbed-node-1] 2026-02-18 03:17:12.480075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-18 03:17:12.480086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-18 03:17:12.480117 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:12.480128 | orchestrator | 2026-02-18 03:17:12.480139 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-18 03:17:12.480150 | orchestrator | Wednesday 18 February 2026 03:17:04 +0000 (0:00:00.906) 0:02:03.020 **** 2026-02-18 03:17:12.480161 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:17:12.480172 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:12.480183 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:17:12.480194 | orchestrator | 2026-02-18 03:17:12.480205 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-18 03:17:12.480218 | orchestrator | Wednesday 18 February 2026 03:17:05 +0000 (0:00:01.339) 0:02:04.360 **** 2026-02-18 03:17:12.480230 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:12.480242 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:17:12.480255 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:17:12.480268 | orchestrator | 2026-02-18 03:17:12.480280 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-18 03:17:12.480299 | orchestrator | Wednesday 18 February 2026 03:17:07 +0000 (0:00:02.167) 0:02:06.527 **** 2026-02-18 03:17:12.480312 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:12.480325 | orchestrator | skipping: [testbed-node-1] 2026-02-18 
03:17:12.480337 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:12.480350 | orchestrator | 2026-02-18 03:17:12.480362 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-18 03:17:12.480375 | orchestrator | Wednesday 18 February 2026 03:17:08 +0000 (0:00:00.330) 0:02:06.858 **** 2026-02-18 03:17:12.480387 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:17:12.480399 | orchestrator | 2026-02-18 03:17:12.480412 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-18 03:17:12.480425 | orchestrator | Wednesday 18 February 2026 03:17:09 +0000 (0:00:01.143) 0:02:08.002 **** 2026-02-18 03:17:12.480462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 03:17:12.480496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 03:17:12.480521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 03:17:14.218943 | orchestrator | 2026-02-18 03:17:14.219056 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-18 03:17:14.219072 | orchestrator | Wednesday 18 February 2026 03:17:12 +0000 (0:00:03.305) 0:02:11.307 **** 2026-02-18 03:17:14.219108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 03:17:14.219125 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:14.219159 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 03:17:14.219197 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:14.219217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 03:17:14.219230 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:14.219241 | orchestrator | 2026-02-18 03:17:14.219252 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-18 03:17:14.219263 | orchestrator | Wednesday 18 February 2026 03:17:13 +0000 (0:00:00.668) 0:02:11.975 **** 2026-02-18 03:17:14.219276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-18 03:17:14.219298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 03:17:14.219312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-18 03:17:14.219332 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 03:17:23.075231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-18 03:17:23.075352 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:23.075369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-18 03:17:23.075382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 03:17:23.075415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-18 03:17:23.075426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 03:17:23.075434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-18 03:17:23.075442 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:23.075449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-18 03:17:23.075456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 03:17:23.075463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-18 03:17:23.075499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 03:17:23.075506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-02-18 03:17:23.075513 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:23.075520 | orchestrator | 2026-02-18 03:17:23.075528 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-18 03:17:23.075537 | orchestrator | Wednesday 18 February 2026 03:17:14 +0000 (0:00:01.071) 0:02:13.047 **** 2026-02-18 03:17:23.075543 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:23.075549 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:17:23.075556 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:17:23.075562 | orchestrator | 2026-02-18 03:17:23.075570 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-18 03:17:23.075577 | orchestrator | Wednesday 18 February 2026 03:17:15 +0000 (0:00:01.635) 0:02:14.682 **** 2026-02-18 03:17:23.075584 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:23.075590 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:17:23.075621 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:17:23.075628 | orchestrator | 2026-02-18 03:17:23.075634 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-18 03:17:23.075640 | orchestrator | Wednesday 18 February 2026 03:17:17 +0000 (0:00:02.072) 0:02:16.755 **** 2026-02-18 03:17:23.075646 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:23.075653 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:23.075681 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:23.075688 | orchestrator | 2026-02-18 03:17:23.075695 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-18 03:17:23.075702 | orchestrator | Wednesday 18 February 2026 03:17:18 +0000 (0:00:00.325) 0:02:17.080 **** 2026-02-18 03:17:23.075709 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:23.075716 | orchestrator | skipping: [testbed-node-1] 
2026-02-18 03:17:23.075722 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:23.075729 | orchestrator | 2026-02-18 03:17:23.075736 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-18 03:17:23.075743 | orchestrator | Wednesday 18 February 2026 03:17:18 +0000 (0:00:00.321) 0:02:17.401 **** 2026-02-18 03:17:23.075751 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:17:23.075760 | orchestrator | 2026-02-18 03:17:23.075768 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-18 03:17:23.075775 | orchestrator | Wednesday 18 February 2026 03:17:19 +0000 (0:00:01.196) 0:02:18.597 **** 2026-02-18 03:17:23.075797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:17:23.075817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 03:17:23.075826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:17:23.075837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:17:23.075854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 03:17:23.721810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:17:23.721976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:17:23.722102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 03:17:23.722128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:17:23.722144 | 
orchestrator | 2026-02-18 03:17:23.722156 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-18 03:17:23.722168 | orchestrator | Wednesday 18 February 2026 03:17:23 +0000 (0:00:03.299) 0:02:21.896 **** 2026-02-18 03:17:23.722201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-18 03:17:23.722221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-18 03:17:23.722234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:17:23.722257 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:23.722271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-18 03:17:23.722283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 03:17:23.722295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:17:23.722306 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:23.722332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-18 03:17:33.078811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 03:17:33.078893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:17:33.078902 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:33.078909 | orchestrator | 2026-02-18 03:17:33.078914 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-18 03:17:33.078921 | orchestrator | Wednesday 18 February 2026 03:17:23 +0000 (0:00:00.647) 0:02:22.544 **** 2026-02-18 03:17:33.078926 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-18 03:17:33.078934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-18 03:17:33.078940 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:33.078945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-18 03:17:33.078950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-18 03:17:33.078954 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:33.078959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-18 03:17:33.078964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-18 03:17:33.078968 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:33.078973 
| orchestrator | 2026-02-18 03:17:33.078977 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-18 03:17:33.078982 | orchestrator | Wednesday 18 February 2026 03:17:24 +0000 (0:00:01.086) 0:02:23.630 **** 2026-02-18 03:17:33.078987 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:33.078991 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:17:33.079014 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:17:33.079019 | orchestrator | 2026-02-18 03:17:33.079024 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-18 03:17:33.079028 | orchestrator | Wednesday 18 February 2026 03:17:26 +0000 (0:00:01.318) 0:02:24.949 **** 2026-02-18 03:17:33.079033 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:33.079037 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:17:33.079042 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:17:33.079046 | orchestrator | 2026-02-18 03:17:33.079051 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-18 03:17:33.079055 | orchestrator | Wednesday 18 February 2026 03:17:28 +0000 (0:00:02.095) 0:02:27.044 **** 2026-02-18 03:17:33.079059 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:33.079074 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:33.079079 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:33.079083 | orchestrator | 2026-02-18 03:17:33.079088 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-18 03:17:33.079102 | orchestrator | Wednesday 18 February 2026 03:17:28 +0000 (0:00:00.327) 0:02:27.372 **** 2026-02-18 03:17:33.079107 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:17:33.079111 | orchestrator | 2026-02-18 03:17:33.079116 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2026-02-18 03:17:33.079120 | orchestrator | Wednesday 18 February 2026 03:17:29 +0000 (0:00:01.264) 0:02:28.637 **** 2026-02-18 03:17:33.079126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 03:17:33.079135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:17:33.079141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 03:17:33.079152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 03:17:33.079161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:17:38.438283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:17:38.438357 | orchestrator | 2026-02-18 03:17:38.438364 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-18 03:17:38.438370 | orchestrator | Wednesday 18 February 2026 03:17:33 +0000 (0:00:03.264) 0:02:31.901 **** 2026-02-18 03:17:38.438376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 03:17:38.438410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:17:38.438429 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:38.438437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 03:17:38.438451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:17:38.438456 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:38.438460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 03:17:38.438464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:17:38.438472 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:38.438476 | orchestrator | 2026-02-18 03:17:38.438480 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-18 03:17:38.438483 | orchestrator | Wednesday 18 February 2026 03:17:33 +0000 (0:00:00.712) 0:02:32.614 **** 2026-02-18 03:17:38.438488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-18 03:17:38.438494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-18 03:17:38.438498 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:38.438502 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-18 03:17:38.438506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-18 03:17:38.438510 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:38.438514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-18 03:17:38.438518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-18 03:17:38.438522 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:38.438525 | orchestrator | 2026-02-18 03:17:38.438532 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-18 03:17:38.438536 | orchestrator | Wednesday 18 February 2026 03:17:34 +0000 (0:00:00.908) 0:02:33.522 **** 2026-02-18 03:17:38.438540 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:38.438543 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:17:38.438547 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:17:38.438551 | orchestrator | 2026-02-18 03:17:38.438555 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-18 03:17:38.438559 | orchestrator | Wednesday 18 February 2026 03:17:36 +0000 (0:00:01.641) 0:02:35.164 **** 2026-02-18 03:17:38.438562 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:38.438566 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:17:38.438570 | orchestrator | changed: 
[testbed-node-2] 2026-02-18 03:17:38.438574 | orchestrator | 2026-02-18 03:17:38.438578 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-18 03:17:38.438584 | orchestrator | Wednesday 18 February 2026 03:17:38 +0000 (0:00:02.097) 0:02:37.262 **** 2026-02-18 03:17:43.043394 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:17:43.043551 | orchestrator | 2026-02-18 03:17:43.043581 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-18 03:17:43.043664 | orchestrator | Wednesday 18 February 2026 03:17:39 +0000 (0:00:01.077) 0:02:38.339 **** 2026-02-18 03:17:43.043696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 03:17:43.043757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:17:43.043779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 03:17:43.043792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 03:17:43.043818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 03:17:43.043853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:17:43.043866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 03:17:43.043887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 03:17:43.043899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 03:17:43.043911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-02-18 03:17:43.043928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 03:17:43.043951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 03:17:44.156951 | orchestrator | 2026-02-18 03:17:44.157032 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-18 03:17:44.157041 | orchestrator | Wednesday 18 February 2026 03:17:43 +0000 (0:00:03.617) 0:02:41.957 **** 2026-02-18 03:17:44.157066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 03:17:44.157075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:17:44.157082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 03:17:44.157088 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 03:17:44.157094 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:44.157112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 03:17:44.157137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:17:44.157152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 03:17:44.157161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 03:17:44.157169 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:44.157176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 03:17:44.157189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:17:44.157199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 03:17:44.157215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 
'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 03:17:55.544918 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:55.545006 | orchestrator | 2026-02-18 03:17:55.545017 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-18 03:17:55.545025 | orchestrator | Wednesday 18 February 2026 03:17:44 +0000 (0:00:01.122) 0:02:43.079 **** 2026-02-18 03:17:55.545033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-18 03:17:55.545042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-18 03:17:55.545050 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:55.545058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-18 03:17:55.545065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-18 03:17:55.545071 | orchestrator | skipping: [testbed-node-1] 2026-02-18 
03:17:55.545078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-18 03:17:55.545084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-18 03:17:55.545091 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:55.545098 | orchestrator | 2026-02-18 03:17:55.545104 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-18 03:17:55.545111 | orchestrator | Wednesday 18 February 2026 03:17:45 +0000 (0:00:00.935) 0:02:44.015 **** 2026-02-18 03:17:55.545118 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:55.545124 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:17:55.545131 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:17:55.545137 | orchestrator | 2026-02-18 03:17:55.545144 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-18 03:17:55.545150 | orchestrator | Wednesday 18 February 2026 03:17:46 +0000 (0:00:01.330) 0:02:45.345 **** 2026-02-18 03:17:55.545157 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:17:55.545163 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:17:55.545170 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:17:55.545176 | orchestrator | 2026-02-18 03:17:55.545183 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-18 03:17:55.545189 | orchestrator | Wednesday 18 February 2026 03:17:48 +0000 (0:00:02.086) 0:02:47.432 **** 2026-02-18 03:17:55.545196 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:17:55.545203 | orchestrator | 2026-02-18 03:17:55.545209 | 
orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-18 03:17:55.545215 | orchestrator | Wednesday 18 February 2026 03:17:49 +0000 (0:00:01.351) 0:02:48.783 **** 2026-02-18 03:17:55.545223 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 03:17:55.545229 | orchestrator | 2026-02-18 03:17:55.545252 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-18 03:17:55.545259 | orchestrator | Wednesday 18 February 2026 03:17:53 +0000 (0:00:03.223) 0:02:52.007 **** 2026-02-18 03:17:55.545289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:17:55.545300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-18 03:17:55.545308 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:55.545318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 
'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:17:55.545330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-18 03:17:55.545337 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:17:55.545351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:17:57.883809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-18 03:17:57.883937 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:57.883963 | orchestrator | 2026-02-18 03:17:57.883975 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-18 03:17:57.883987 | orchestrator | Wednesday 18 February 2026 03:17:55 +0000 (0:00:02.355) 0:02:54.362 **** 2026-02-18 03:17:57.884045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:17:57.884067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-18 03:17:57.884121 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:17:57.884168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:17:57.884216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-18 03:17:57.884235 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:17:57.884254 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:17:57.884284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-18 03:18:08.314282 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:18:08.314366 | orchestrator | 2026-02-18 03:18:08.314377 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-18 03:18:08.314385 | orchestrator | Wednesday 18 February 2026 03:17:57 +0000 (0:00:02.343) 0:02:56.706 **** 2026-02-18 03:18:08.314394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 03:18:08.314434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 03:18:08.314441 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:18:08.314448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 03:18:08.314455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 03:18:08.314461 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:18:08.314468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 03:18:08.314474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 03:18:08.314481 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:18:08.314487 | orchestrator | 2026-02-18 03:18:08.314493 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-18 03:18:08.314499 | orchestrator | Wednesday 18 February 2026 03:18:00 +0000 (0:00:02.974) 0:02:59.680 **** 2026-02-18 03:18:08.314506 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:18:08.314529 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:18:08.314536 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:18:08.314542 | orchestrator | 2026-02-18 03:18:08.314549 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-18 03:18:08.314555 | orchestrator | Wednesday 18 February 2026 03:18:02 +0000 (0:00:02.125) 0:03:01.806 **** 2026-02-18 03:18:08.314561 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:18:08.314567 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:18:08.314574 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:18:08.314580 | orchestrator | 2026-02-18 03:18:08.314586 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-18 03:18:08.314592 | 
orchestrator | Wednesday 18 February 2026 03:18:04 +0000 (0:00:01.461) 0:03:03.267 **** 2026-02-18 03:18:08.314599 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:18:08.314622 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:18:08.314629 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:18:08.314636 | orchestrator | 2026-02-18 03:18:08.314642 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-18 03:18:08.314648 | orchestrator | Wednesday 18 February 2026 03:18:04 +0000 (0:00:00.350) 0:03:03.618 **** 2026-02-18 03:18:08.314654 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:18:08.314661 | orchestrator | 2026-02-18 03:18:08.314667 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-18 03:18:08.314673 | orchestrator | Wednesday 18 February 2026 03:18:06 +0000 (0:00:01.624) 0:03:05.242 **** 2026-02-18 03:18:08.314685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-18 03:18:08.314694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 
'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-18 03:18:08.314701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-18 03:18:08.314708 | orchestrator | 2026-02-18 03:18:08.314715 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-18 03:18:08.314727 | orchestrator | Wednesday 18 February 2026 03:18:08 +0000 (0:00:01.698) 0:03:06.940 **** 2026-02-18 03:18:08.314738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-18 03:18:17.225583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-18 03:18:17.225723 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:18:17.225742 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:18:17.225756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-18 03:18:17.225767 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:18:17.225778 | orchestrator | 2026-02-18 03:18:17.225790 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-18 03:18:17.225799 | orchestrator | Wednesday 18 February 2026 03:18:08 +0000 (0:00:00.399) 0:03:07.340 **** 2026-02-18 03:18:17.225807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-18 03:18:17.225815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-18 03:18:17.225822 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:18:17.225832 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:18:17.225843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-18 03:18:17.225877 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:18:17.225885 | orchestrator | 2026-02-18 03:18:17.225940 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-18 
03:18:17.225953 | orchestrator | Wednesday 18 February 2026 03:18:09 +0000 (0:00:00.883) 0:03:08.224 **** 2026-02-18 03:18:17.225964 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:18:17.225974 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:18:17.225984 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:18:17.225992 | orchestrator | 2026-02-18 03:18:17.225999 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-18 03:18:17.226005 | orchestrator | Wednesday 18 February 2026 03:18:09 +0000 (0:00:00.571) 0:03:08.795 **** 2026-02-18 03:18:17.226011 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:18:17.226085 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:18:17.226096 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:18:17.226107 | orchestrator | 2026-02-18 03:18:17.226118 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-18 03:18:17.226128 | orchestrator | Wednesday 18 February 2026 03:18:11 +0000 (0:00:01.407) 0:03:10.203 **** 2026-02-18 03:18:17.226139 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:18:17.226150 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:18:17.226160 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:18:17.226171 | orchestrator | 2026-02-18 03:18:17.226183 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-18 03:18:17.226194 | orchestrator | Wednesday 18 February 2026 03:18:11 +0000 (0:00:00.329) 0:03:10.533 **** 2026-02-18 03:18:17.226207 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:18:17.226218 | orchestrator | 2026-02-18 03:18:17.226230 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-18 03:18:17.226241 | orchestrator | Wednesday 18 February 2026 03:18:13 +0000 (0:00:01.535) 0:03:12.068 
**** 2026-02-18 03:18:17.226273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 03:18:17.226295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 03:18:17.226307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.226330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.226338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.226351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.353503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.353677 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.353720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-18 03:18:17.353734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-18 03:18:17.353747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.353783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.353805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-18 03:18:17.353819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 03:18:17.353839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-18 03:18:17.353852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.353980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.354006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 03:18:17.441145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.441319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.441341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.441354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-18 03:18:17.441366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-18 03:18:17.441380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-18 03:18:17.441421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.441442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.441454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-18 03:18:17.441467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-18 03:18:17.441479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-18 03:18:17.441494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 
'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-18 03:18:17.441514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-18 03:18:17.636738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-18 03:18:17.636820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.636831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.636839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 03:18:17.636848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 03:18:17.636879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.636904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.636912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-18 03:18:17.636919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-18 03:18:17.636928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-18 03:18:17.636939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-18 03:18:17.636949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:17.636971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:18.758552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-18 03:18:18.758667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-18 03:18:18.758679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-18 03:18:18.758686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-18 03:18:18.758691 | orchestrator | 2026-02-18 03:18:18.758697 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-18 03:18:18.758720 | orchestrator | Wednesday 18 February 2026 03:18:17 +0000 (0:00:04.394) 0:03:16.463 **** 2026-02-18 03:18:18.758749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 03:18:18.758757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:18.758763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:18.758769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:18.758774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-18 03:18:18.758787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-18 03:18:18.758797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-18 03:18:18.840568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-18 03:18:18.840714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:18.840731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-18 03:18:18.840744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:18.840791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-18 03:18:18.840845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-18 03:18:18.840858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:18.840869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-18 03:18:18.840880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:18.840891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:18.840908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:18.840930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-18 03:18:18.937580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-18 03:18:18.937727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-18 03:18:18.937747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-18 03:18:18.937784 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:18:18.937798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:18.937825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:18.937862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:18.937877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-18 03:18:18.937890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:18.937910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-18 03:18:18.937922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-18 03:18:18.937934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:18.937956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-18 03:18:19.132881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:19.133013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:19.133051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-18 03:18:19.133065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-18 03:18:19.133076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-18 03:18:19.133092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-18 03:18:19.133103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:19.133130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-18 03:18:19.133141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:19.133157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:19.133172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-18 03:18:19.133185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-18 03:18:19.133200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-18 03:18:29.423831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-18 03:18:29.423984 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:18:29.424037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-18 03:18:29.424111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-18 03:18:29.424156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-18 03:18:29.424177 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:18:29.424196 | orchestrator |
2026-02-18 03:18:29.424216 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-02-18 03:18:29.424236 | orchestrator | Wednesday 18 February 2026 03:18:19 +0000 (0:00:01.494) 0:03:17.958 ****
2026-02-18 03:18:29.424256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-02-18 03:18:29.424276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-02-18 03:18:29.424295 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:18:29.424315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-02-18 03:18:29.424333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-02-18 03:18:29.424352 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:18:29.424396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-02-18 03:18:29.424417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-02-18 03:18:29.424457 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:18:29.424477 | orchestrator |
2026-02-18 03:18:29.424498 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-02-18 03:18:29.424512 | orchestrator | Wednesday 18 February 2026 03:18:21 +0000 (0:00:02.067) 0:03:20.025 ****
2026-02-18 03:18:29.424524 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:18:29.424536 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:18:29.424551 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:18:29.424570 | orchestrator |
2026-02-18 03:18:29.424844 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-02-18 03:18:29.424863 | orchestrator | Wednesday 18 February 2026 03:18:22 +0000 (0:00:01.328) 0:03:21.354 ****
2026-02-18 03:18:29.424874 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:18:29.424886 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:18:29.424896 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:18:29.424907 | orchestrator |
2026-02-18 03:18:29.424917 | orchestrator | TASK [include_role : placement] ************************************************
2026-02-18 03:18:29.424928 | orchestrator | Wednesday 18 February 2026 03:18:24 +0000 (0:00:02.118) 0:03:23.472 ****
2026-02-18 03:18:29.424939 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:18:29.424949 | orchestrator |
2026-02-18 03:18:29.424960 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-02-18 03:18:29.424971 | orchestrator | Wednesday 18 February 2026 03:18:25 +0000 (0:00:01.304) 0:03:24.777 ****
2026-02-18 03:18:29.424984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-18 03:18:29.425009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-18 03:18:29.425021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-18 03:18:29.425045 | orchestrator |
2026-02-18 03:18:29.425073 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-02-18 03:18:40.606131 | orchestrator | Wednesday 18 February 2026 03:18:29 +0000 (0:00:03.465) 0:03:28.243 ****
2026-02-18 03:18:40.606246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-18 03:18:40.606264 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:18:40.606275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-18 03:18:40.606284 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:18:40.606310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-18 03:18:40.606327 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:18:40.606341 | orchestrator |
2026-02-18 03:18:40.606356 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-02-18 03:18:40.606371 | orchestrator | Wednesday 18 February 2026 03:18:29 +0000 (0:00:00.519) 0:03:28.762 ****
2026-02-18 03:18:40.606388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-18 03:18:40.606434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-18 03:18:40.606451 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:18:40.606466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-18 03:18:40.606483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-18 03:18:40.606498 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:18:40.606537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-18 03:18:40.606554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-18 03:18:40.606571 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:18:40.606586 | orchestrator |
2026-02-18 03:18:40.606602 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-02-18 03:18:40.606646 | orchestrator | Wednesday 18 February 2026 03:18:30 +0000 (0:00:00.808) 0:03:29.571 ****
2026-02-18 03:18:40.606663 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:18:40.606678 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:18:40.606693 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:18:40.606709 | orchestrator |
2026-02-18 03:18:40.606724 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-02-18 03:18:40.606741 | orchestrator | Wednesday 18 February 2026 03:18:32 +0000 (0:00:01.996) 0:03:31.567 ****
2026-02-18 03:18:40.606752 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:18:40.606762 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:18:40.606772 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:18:40.606782 | orchestrator |
2026-02-18 03:18:40.606792 | orchestrator | TASK [include_role : nova]
***************************************************** 2026-02-18 03:18:40.606802 | orchestrator | Wednesday 18 February 2026 03:18:34 +0000 (0:00:01.904) 0:03:33.472 **** 2026-02-18 03:18:40.606812 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:18:40.606823 | orchestrator | 2026-02-18 03:18:40.606832 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-18 03:18:40.606842 | orchestrator | Wednesday 18 February 2026 03:18:36 +0000 (0:00:01.554) 0:03:35.027 **** 2026-02-18 03:18:40.606857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 03:18:40.606888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:18:40.606971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:18:40.607000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 03:18:41.838928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:18:41.839052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:18:41.839127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 03:18:41.839151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:18:41.839168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:18:41.839187 | orchestrator | 2026-02-18 03:18:41.839205 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-18 03:18:41.839224 | orchestrator | Wednesday 18 February 2026 03:18:40 +0000 (0:00:04.402) 0:03:39.430 **** 2026-02-18 03:18:41.839270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 03:18:41.839297 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:18:41.839314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:18:41.839325 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:18:41.839337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 03:18:41.839355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:18:54.129500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:18:54.129592 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:18:54.129657 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 03:18:54.129685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 03:18:54.129693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 03:18:54.129699 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:18:54.129706 | orchestrator | 2026-02-18 03:18:54.129713 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-18 03:18:54.129721 | orchestrator | Wednesday 18 February 2026 03:18:41 +0000 (0:00:01.229) 0:03:40.659 **** 2026-02-18 03:18:54.129729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129783 | orchestrator | skipping: [testbed-node-0] 2026-02-18 
03:18:54.129790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129829 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:18:54.129835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-18 03:18:54.129864 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:18:54.129871 | orchestrator | 2026-02-18 03:18:54.129877 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-18 03:18:54.129883 | orchestrator | Wednesday 18 February 2026 03:18:43 +0000 (0:00:01.404) 0:03:42.064 **** 2026-02-18 03:18:54.129889 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:18:54.129895 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:18:54.129902 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:18:54.129908 | orchestrator | 2026-02-18 03:18:54.129914 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-18 03:18:54.129920 | orchestrator | Wednesday 18 February 2026 03:18:44 +0000 (0:00:01.550) 0:03:43.615 **** 2026-02-18 03:18:54.129926 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:18:54.129932 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:18:54.129938 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:18:54.129944 | orchestrator | 2026-02-18 03:18:54.129950 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-18 03:18:54.129957 | orchestrator | Wednesday 18 February 2026 03:18:46 +0000 (0:00:02.186) 0:03:45.802 **** 2026-02-18 03:18:54.129963 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:18:54.129969 | orchestrator | 2026-02-18 03:18:54.129975 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-18 03:18:54.129981 | orchestrator | Wednesday 18 February 2026 03:18:48 +0000 (0:00:01.720) 0:03:47.523 **** 2026-02-18 03:18:54.129988 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2026-02-18 03:18:54.129996 | orchestrator | 2026-02-18 03:18:54.130002 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-18 03:18:54.130008 | orchestrator | Wednesday 18 February 2026 03:18:49 +0000 (0:00:00.901) 0:03:48.424 **** 2026-02-18 03:18:54.130061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-18 03:18:54.130082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-18 03:19:06.825813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-18 03:19:06.825947 | orchestrator | 
2026-02-18 03:19:06.825978 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-18 03:19:06.825999 | orchestrator | Wednesday 18 February 2026 03:18:54 +0000 (0:00:04.529) 0:03:52.954 **** 2026-02-18 03:19:06.826089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 03:19:06.826118 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:06.826165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 03:19:06.826188 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:06.826210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 03:19:06.826231 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:06.826250 | orchestrator | 2026-02-18 03:19:06.826270 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-18 03:19:06.826291 | orchestrator | Wednesday 18 February 2026 03:18:55 +0000 (0:00:01.535) 0:03:54.490 **** 2026-02-18 03:19:06.826312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 03:19:06.826336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 03:19:06.826390 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:06.826410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 03:19:06.826432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 03:19:06.826453 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:06.826473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 03:19:06.826493 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 03:19:06.826536 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:06.826557 | orchestrator | 2026-02-18 03:19:06.826577 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-18 03:19:06.826596 | orchestrator | Wednesday 18 February 2026 03:18:57 +0000 (0:00:01.708) 0:03:56.198 **** 2026-02-18 03:19:06.826618 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:19:06.826691 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:19:06.826702 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:19:06.826713 | orchestrator | 2026-02-18 03:19:06.826724 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-18 03:19:06.826735 | orchestrator | Wednesday 18 February 2026 03:18:59 +0000 (0:00:02.611) 0:03:58.810 **** 2026-02-18 03:19:06.826746 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:19:06.826757 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:19:06.826767 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:19:06.826778 | orchestrator | 2026-02-18 03:19:06.826788 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-18 03:19:06.826799 | orchestrator | Wednesday 18 February 2026 03:19:02 +0000 (0:00:02.898) 0:04:01.709 **** 2026-02-18 03:19:06.826811 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-18 03:19:06.826823 | orchestrator | 2026-02-18 03:19:06.826834 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-18 03:19:06.826844 | orchestrator | 
Wednesday 18 February 2026 03:19:04 +0000 (0:00:01.159) 0:04:02.868 **** 2026-02-18 03:19:06.826865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 03:19:06.826878 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:06.826889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 03:19:06.826911 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:06.826923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 03:19:06.826934 | orchestrator | skipping: [testbed-node-2] 2026-02-18 
03:19:06.826945 | orchestrator | 2026-02-18 03:19:06.826956 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-18 03:19:06.826967 | orchestrator | Wednesday 18 February 2026 03:19:05 +0000 (0:00:01.502) 0:04:04.371 **** 2026-02-18 03:19:06.826978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 03:19:06.826989 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:06.827000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 03:19:06.827020 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:30.749122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 03:19:30.749234 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:30.749251 | orchestrator | 2026-02-18 03:19:30.749264 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-18 03:19:30.749277 | orchestrator | Wednesday 18 February 2026 03:19:06 +0000 (0:00:01.276) 0:04:05.648 **** 2026-02-18 03:19:30.749289 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:30.749300 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:30.749310 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:30.749321 | orchestrator | 2026-02-18 03:19:30.749332 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-18 03:19:30.749343 | orchestrator | Wednesday 18 February 2026 03:19:08 +0000 (0:00:01.558) 0:04:07.206 **** 2026-02-18 03:19:30.749354 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:19:30.749366 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:19:30.749376 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:19:30.749387 | orchestrator | 2026-02-18 03:19:30.749398 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-18 03:19:30.749409 | orchestrator | Wednesday 18 February 2026 03:19:11 +0000 (0:00:02.750) 0:04:09.957 **** 2026-02-18 03:19:30.749445 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:19:30.749456 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:19:30.749467 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:19:30.749477 | orchestrator | 2026-02-18 03:19:30.749502 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-18 03:19:30.749513 | orchestrator | Wednesday 18 February 2026 03:19:13 +0000 (0:00:02.757) 0:04:12.715 **** 2026-02-18 03:19:30.749524 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-18 03:19:30.749537 | orchestrator | 2026-02-18 03:19:30.749547 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-18 03:19:30.749558 | orchestrator | Wednesday 18 February 2026 03:19:15 +0000 (0:00:01.334) 0:04:14.049 **** 2026-02-18 03:19:30.749570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-18 03:19:30.749581 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:30.749592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-18 03:19:30.749603 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:30.749614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-18 03:19:30.749625 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:30.749665 | orchestrator | 2026-02-18 03:19:30.749678 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-18 03:19:30.749691 | orchestrator | Wednesday 18 February 2026 03:19:16 +0000 (0:00:01.352) 0:04:15.401 **** 2026-02-18 03:19:30.749723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-18 03:19:30.749737 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:30.749750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-18 03:19:30.749771 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:30.749784 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-18 03:19:30.749796 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:30.749809 | orchestrator | 2026-02-18 03:19:30.749828 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-18 03:19:30.749840 | orchestrator | Wednesday 18 February 2026 03:19:17 +0000 (0:00:01.346) 0:04:16.747 **** 2026-02-18 03:19:30.749853 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:30.749865 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:30.749877 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:30.749890 | orchestrator | 2026-02-18 03:19:30.749902 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-18 03:19:30.749914 | orchestrator | Wednesday 18 February 2026 03:19:19 +0000 (0:00:01.946) 0:04:18.694 **** 2026-02-18 03:19:30.749926 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:19:30.749938 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:19:30.749951 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:19:30.749963 | orchestrator | 2026-02-18 03:19:30.749975 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-18 03:19:30.749987 | orchestrator | Wednesday 18 February 2026 03:19:22 +0000 (0:00:02.425) 0:04:21.120 **** 2026-02-18 03:19:30.749999 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:19:30.750012 | orchestrator | ok: 
[testbed-node-1] 2026-02-18 03:19:30.750082 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:19:30.750093 | orchestrator | 2026-02-18 03:19:30.750104 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-18 03:19:30.750115 | orchestrator | Wednesday 18 February 2026 03:19:25 +0000 (0:00:03.215) 0:04:24.335 **** 2026-02-18 03:19:30.750126 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:19:30.750137 | orchestrator | 2026-02-18 03:19:30.750148 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-18 03:19:30.750159 | orchestrator | Wednesday 18 February 2026 03:19:26 +0000 (0:00:01.347) 0:04:25.683 **** 2026-02-18 03:19:30.750181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 03:19:30.750193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 03:19:30.750231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 03:19:31.466967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 03:19:31.467052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 03:19:31.467060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 03:19:31.467065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 03:19:31.467070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:19:31.467088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 03:19:31.467103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:19:31.467108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 03:19:31.467112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 03:19:31.467116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2026-02-18 03:19:31.467141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 03:19:31.467150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:19:31.467154 | orchestrator | 2026-02-18 03:19:31.467159 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-18 03:19:31.467164 | orchestrator | Wednesday 18 February 2026 03:19:30 +0000 (0:00:04.029) 0:04:29.712 **** 2026-02-18 03:19:31.467177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 03:19:31.609343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 03:19:31.609445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 03:19:31.609462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 03:19:31.609475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:19:31.609512 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:31.609526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 03:19:31.609540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 03:19:31.609582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 03:19:31.609596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 03:19:31.609607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:19:31.609658 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:31.609671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 03:19:31.609682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 03:19:31.609694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 03:19:31.609719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 03:19:43.587791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 03:19:43.587899 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:43.587917 | orchestrator | 2026-02-18 03:19:43.587930 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-18 03:19:43.587943 | orchestrator | Wednesday 18 February 2026 03:19:31 +0000 (0:00:00.727) 0:04:30.440 **** 2026-02-18 03:19:43.587954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 03:19:43.587997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 03:19:43.588012 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:43.588023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 03:19:43.588034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 03:19:43.588045 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:43.588055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 03:19:43.588066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 03:19:43.588077 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:43.588088 | orchestrator | 2026-02-18 03:19:43.588099 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-18 03:19:43.588110 | orchestrator | Wednesday 18 February 2026 03:19:32 +0000 (0:00:01.042) 0:04:31.483 **** 2026-02-18 03:19:43.588121 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:19:43.588131 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:19:43.588142 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:19:43.588152 | orchestrator | 2026-02-18 03:19:43.588163 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-18 03:19:43.588174 | orchestrator | Wednesday 18 February 2026 03:19:34 +0000 (0:00:01.782) 0:04:33.265 **** 2026-02-18 03:19:43.588184 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:19:43.588195 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:19:43.588206 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:19:43.588217 | orchestrator | 2026-02-18 03:19:43.588228 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-18 03:19:43.588239 | orchestrator | Wednesday 18 February 2026 03:19:36 +0000 (0:00:02.138) 0:04:35.403 **** 2026-02-18 03:19:43.588250 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:19:43.588261 | orchestrator | 2026-02-18 03:19:43.588273 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2026-02-18 03:19:43.588285 | orchestrator | Wednesday 18 February 2026 03:19:38 +0000 (0:00:01.505) 0:04:36.908 **** 2026-02-18 03:19:43.588316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:19:43.588351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:19:43.588374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:19:43.588389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:19:43.588410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:19:43.588439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:19:45.766305 | orchestrator | 2026-02-18 03:19:45.766418 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-18 03:19:45.766435 | orchestrator | Wednesday 18 February 2026 03:19:43 +0000 (0:00:05.499) 0:04:42.408 **** 2026-02-18 03:19:45.766451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-18 03:19:45.766470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-18 03:19:45.766483 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:45.766520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-18 03:19:45.766533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-18 03:19:45.766592 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:45.766606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-18 03:19:45.766618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-18 03:19:45.766630 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:45.766667 | orchestrator | 2026-02-18 03:19:45.766679 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-18 03:19:45.766691 | orchestrator | Wednesday 18 February 2026 03:19:44 +0000 (0:00:01.257) 0:04:43.666 **** 2026-02-18 03:19:45.766703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-18 03:19:45.766716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-18 03:19:45.766730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-18 03:19:45.766752 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:45.766769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2026-02-18 03:19:45.766781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-18 03:19:45.766792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-18 03:19:45.766803 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:45.766814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-18 03:19:45.766825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-18 03:19:45.766850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-18 03:19:52.385600 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:52.385819 | orchestrator | 2026-02-18 03:19:52.385838 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-18 03:19:52.385852 | orchestrator | Wednesday 18 February 2026 03:19:45 +0000 (0:00:00.919) 0:04:44.585 **** 2026-02-18 03:19:52.385863 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:52.385874 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:52.385885 | orchestrator | 
skipping: [testbed-node-2] 2026-02-18 03:19:52.385895 | orchestrator | 2026-02-18 03:19:52.385906 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-18 03:19:52.385920 | orchestrator | Wednesday 18 February 2026 03:19:46 +0000 (0:00:00.450) 0:04:45.035 **** 2026-02-18 03:19:52.385938 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:52.385957 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:52.385976 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:52.385992 | orchestrator | 2026-02-18 03:19:52.386009 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-18 03:19:52.386104 | orchestrator | Wednesday 18 February 2026 03:19:48 +0000 (0:00:01.835) 0:04:46.870 **** 2026-02-18 03:19:52.386123 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:19:52.386142 | orchestrator | 2026-02-18 03:19:52.386162 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-18 03:19:52.386181 | orchestrator | Wednesday 18 February 2026 03:19:49 +0000 (0:00:01.777) 0:04:48.647 **** 2026-02-18 03:19:52.386204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2026-02-18 03:19:52.386266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 03:19:52.386297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:52.386311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:52.386325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 03:19:52.386364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-18 03:19:52.386386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 03:19:52.386404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:52.386435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:52.386454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 03:19:52.386481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-18 03:19:52.386501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 03:19:52.386533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:53.981783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:53.981894 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 03:19:53.981939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-18 03:19:53.981972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-18 03:19:53.981985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:53.981998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:53.982104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 03:19:53.982122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-18 03:19:53.982159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-18 03:19:53.982171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:53.982183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:53.982195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 03:19:53.982218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-18 03:19:54.771147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-18 03:19:54.771246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:54.771276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:54.771287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 03:19:54.771297 | orchestrator | 2026-02-18 03:19:54.771309 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-18 03:19:54.771319 | orchestrator | Wednesday 18 February 2026 03:19:54 +0000 (0:00:04.334) 0:04:52.982 **** 2026-02-18 03:19:54.771330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-18 03:19:54.771340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 03:19:54.771384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:54.771395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:54.771406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 03:19:54.771422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-18 03:19:54.771433 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-18 03:19:54.771448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-18 03:19:54.883213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:54.883320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 03:19:54.883353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:54.883367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:54.883379 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 03:19:54.883392 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:54.883406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:54.883419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 03:19:54.883507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-18 03:19:54.883523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-18 03:19:54.883542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-18 03:19:54.883554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 03:19:54.883566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:54.883584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:54.883605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:56.810141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:56.810265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 
03:19:56.810316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 03:19:56.810338 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:19:56.810362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-18 03:19:56.810387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-18 03:19:56.810438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:56.810484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 03:19:56.810498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 03:19:56.810510 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:19:56.810521 | orchestrator | 2026-02-18 03:19:56.810533 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-18 03:19:56.810546 | orchestrator | Wednesday 18 February 2026 03:19:55 +0000 (0:00:00.880) 0:04:53.862 **** 2026-02-18 03:19:56.810576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-18 03:19:56.810593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-18 03:19:56.810609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-18 03:19:56.810627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-18 03:19:56.810678 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:19:56.810697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-18 03:19:56.810719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-18 03:19:56.810736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-18 03:19:56.810755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-18 03:19:56.810772 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:19:56.810789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-18 03:19:56.810806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-18 03:19:56.810824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-18 03:19:56.810853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-18 03:20:04.882434 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:20:04.882584 | orchestrator |
2026-02-18 03:20:04.882611 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-02-18 03:20:04.882628 | orchestrator | Wednesday 18 February 2026 03:19:56 +0000 (0:00:01.767) 0:04:55.629 ****
2026-02-18 03:20:04.882640 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:20:04.882741 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:20:04.882753 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:20:04.882763 | orchestrator |
2026-02-18 03:20:04.882775 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-02-18 03:20:04.882786 | orchestrator | Wednesday 18 February 2026 03:19:57 +0000 (0:00:00.464) 0:04:56.094 ****
2026-02-18 03:20:04.882797 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:20:04.882808 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:20:04.882819 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:20:04.882829 | orchestrator |
2026-02-18 03:20:04.882840 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-02-18 03:20:04.882851 | orchestrator | Wednesday 18 February 2026 03:19:58 +0000 (0:00:01.490) 0:04:57.584 ****
2026-02-18 03:20:04.882862 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:20:04.882873 | orchestrator |
2026-02-18 03:20:04.882884 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-02-18 03:20:04.882894 | orchestrator | Wednesday 18 February 2026 03:20:00 +0000 (0:00:01.848) 0:04:59.432 ****
2026-02-18 03:20:04.882910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:20:04.882957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:20:04.883016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:20:04.883030 | orchestrator |
2026-02-18 03:20:04.883044 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-02-18 03:20:04.883077 | orchestrator | Wednesday 18 February 2026 03:20:02 +0000 (0:00:02.245) 0:05:01.678 ****
2026-02-18 03:20:04.883099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:20:04.883124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:20:04.883138 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:20:04.883151 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:20:04.883164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:20:04.883178 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:20:04.883191 | orchestrator |
2026-02-18 03:20:04.883204 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-02-18 03:20:04.883217 | orchestrator | Wednesday 18 February 2026 03:20:03 +0000 (0:00:00.436) 0:05:02.114 ****
2026-02-18 03:20:04.883231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-18 03:20:04.883245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-18 03:20:04.883258 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:20:04.883271 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:20:04.883284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-18 03:20:04.883297 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:20:04.883311 | orchestrator |
2026-02-18 03:20:04.883323 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-02-18 03:20:04.883334 | orchestrator | Wednesday 18 February 2026 03:20:03 +0000 (0:00:00.675) 0:05:02.790 ****
2026-02-18 03:20:04.883351 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:20:15.847772 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:20:15.847871 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:20:15.847882 | orchestrator |
2026-02-18 03:20:15.847890 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-02-18 03:20:15.847899 | orchestrator | Wednesday 18 February 2026 03:20:04 +0000 (0:00:00.920) 0:05:03.711 ****
2026-02-18 03:20:15.847906 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:20:15.847934 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:20:15.847941 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:20:15.847948 | orchestrator |
2026-02-18 03:20:15.847955 | orchestrator | TASK [include_role : skyline] **************************************************
2026-02-18 03:20:15.847961 | orchestrator | Wednesday 18 February 2026 03:20:06 +0000 (0:00:01.445) 0:05:05.156 ****
2026-02-18 03:20:15.847968 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:20:15.847975 | orchestrator |
2026-02-18 03:20:15.847982 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-02-18 03:20:15.847989 | orchestrator | Wednesday 18 February 2026 03:20:07 +0000 (0:00:01.552) 0:05:06.709 ****
2026-02-18 03:20:15.848011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-18 03:20:15.848023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-18 03:20:15.848031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-18 03:20:15.848054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-18 03:20:15.848072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-18 03:20:15.848080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-18 03:20:15.848086 | orchestrator |
2026-02-18 03:20:15.848093 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-02-18 03:20:15.848101 | orchestrator | Wednesday 18 February 2026 03:20:14 +0000 (0:00:06.838) 0:05:13.547 ****
2026-02-18 03:20:15.848108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-18 03:20:15.848121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-18 03:20:21.867207 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:20:21.867321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-18 03:20:21.867337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-18 03:20:21.867348 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:20:21.867357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-18 03:20:21.867367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-18 03:20:21.867483 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:20:21.867496 | orchestrator |
2026-02-18 03:20:21.867506 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-02-18 03:20:21.867543 | orchestrator | Wednesday 18 February 2026 03:20:15 +0000 (0:00:01.127) 0:05:14.675 ****
2026-02-18 03:20:21.867571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867629 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:20:21.867638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867721 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:20:21.867730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-18 03:20:21.867768 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:20:21.867798 | orchestrator |
2026-02-18 03:20:21.867809 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-02-18 03:20:21.867819 | orchestrator | Wednesday 18 February 2026 03:20:16 +0000 (0:00:01.004) 0:05:15.679 ****
2026-02-18 03:20:21.867829 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:20:21.867839 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:20:21.867849 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:20:21.867859 | orchestrator |
2026-02-18 03:20:21.867869 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-02-18 03:20:21.867878 | orchestrator | Wednesday 18 February 2026 03:20:18 +0000 (0:00:01.374) 0:05:17.053 ****
2026-02-18 03:20:21.867888 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:20:21.867898 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:20:21.867907 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:20:21.867929 | orchestrator |
2026-02-18 03:20:21.867940 | orchestrator | TASK [include_role : swift] ****************************************************
2026-02-18 03:20:21.867958 | orchestrator | Wednesday 18 February 2026 03:20:20 +0000 (0:00:02.282) 0:05:19.335 ****
2026-02-18 03:20:21.867969 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:20:21.867979 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:20:21.867989 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:20:21.867997 | orchestrator |
2026-02-18 03:20:21.868006 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-02-18 03:20:21.868014 | orchestrator | Wednesday 18 February 2026 03:20:21 +0000 (0:00:00.696) 0:05:20.032 ****
2026-02-18 03:20:21.868023 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:20:21.868032 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:20:21.868040 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:20:21.868049 | orchestrator |
2026-02-18 03:20:21.868057 | orchestrator | TASK [include_role : trove] ****************************************************
2026-02-18 03:20:21.868066 | orchestrator | Wednesday 18 February 2026 03:20:21 +0000 (0:00:00.340) 0:05:20.372 ****
2026-02-18 03:20:21.868075 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:20:21.868089 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:21:08.397268 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:21:08.397403 | orchestrator |
2026-02-18 03:21:08.397432 | orchestrator | TASK [include_role : venus] ****************************************************
2026-02-18 03:21:08.397455 | orchestrator | Wednesday 18 February 2026 03:20:21 +0000 (0:00:00.324) 0:05:20.697 ****
2026-02-18 03:21:08.397477 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:21:08.397495 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:21:08.397507 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:21:08.397518 | orchestrator |
2026-02-18 03:21:08.397529 | orchestrator | TASK [include_role : watcher] **************************************************
2026-02-18 03:21:08.397541 | orchestrator | Wednesday 18 February 2026 03:20:22 +0000 (0:00:00.332) 0:05:21.030 ****
2026-02-18 03:21:08.397552 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:21:08.397563 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:21:08.397574 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:21:08.397585 | orchestrator |
2026-02-18 03:21:08.397597 | orchestrator | TASK [include_role : zun] ******************************************************
2026-02-18 03:21:08.397624 | orchestrator | Wednesday 18 February 2026 03:20:22 +0000 (0:00:00.650) 0:05:21.681 ****
2026-02-18 03:21:08.397637 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:21:08.397648 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:21:08.397660 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:21:08.397671 | orchestrator |
2026-02-18 03:21:08.397771 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-02-18 03:21:08.397783 | orchestrator | Wednesday 18 February 2026 03:20:23 +0000 (0:00:00.564) 0:05:22.246 ****
2026-02-18 03:21:08.397794 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:21:08.397807 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:21:08.397820 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:21:08.397833 | orchestrator |
2026-02-18 03:21:08.397846 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-02-18 03:21:08.397893 | orchestrator | Wednesday 18 February 2026 03:20:24 +0000 (0:00:00.677) 0:05:22.923 ****
2026-02-18 03:21:08.397906 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:21:08.397919 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:21:08.397932 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:21:08.397945 | orchestrator |
2026-02-18 03:21:08.397957 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-02-18 03:21:08.397971 | orchestrator | Wednesday 18 February 2026 03:20:24 +0000 (0:00:00.739) 0:05:23.663 ****
2026-02-18 03:21:08.397984 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:21:08.397998 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:21:08.398010 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:21:08.398163 | orchestrator |
2026-02-18 03:21:08.398177 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-02-18 03:21:08.398190 | orchestrator | Wednesday 18 February 2026 03:20:25 +0000 (0:00:00.880) 0:05:24.543 ****
2026-02-18 03:21:08.398203 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:21:08.398214 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:21:08.398225 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:21:08.398235 | orchestrator |
2026-02-18 03:21:08.398246 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-02-18 03:21:08.398257 | orchestrator | Wednesday 18 February 2026 03:20:26 +0000 (0:00:00.907) 0:05:25.451 ****
2026-02-18 03:21:08.398268 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:21:08.398278 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:21:08.398289 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:21:08.398300 | orchestrator |
2026-02-18 03:21:08.398311 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-02-18 03:21:08.398321 | orchestrator | Wednesday 18 February 2026 03:20:27 +0000 (0:00:00.827) 0:05:26.279 ****
2026-02-18 03:21:08.398332 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:21:08.398343 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:21:08.398354 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:21:08.398365 | orchestrator |
2026-02-18 03:21:08.398375 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-18 03:21:08.398386 | orchestrator | Wednesday 18 February 2026 03:20:32 +0000 (0:00:05.006) 0:05:31.286 ****
2026-02-18 03:21:08.398397 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:21:08.398408 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:21:08.398418 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:21:08.398429 | orchestrator |
2026-02-18 03:21:08.398440 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-18 03:21:08.398450 | orchestrator | Wednesday 18 February 2026 03:20:36 +0000 (0:00:04.062) 0:05:35.348 ****
2026-02-18 03:21:08.398461 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:21:08.398472 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:21:08.398482 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:21:08.398493 | orchestrator |
2026-02-18 03:21:08.398504 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-18 03:21:08.398515 | orchestrator | Wednesday 18 February 2026 03:20:52 +0000 (0:00:15.829) 0:05:51.178 ****
2026-02-18 03:21:08.398526 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:21:08.398537 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:21:08.398547 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:21:08.398558 | orchestrator |
2026-02-18 03:21:08.398569 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-18 03:21:08.398579 | orchestrator | Wednesday 18 February 2026 03:20:53 +0000 (0:00:00.763) 0:05:51.942 **** 2026-02-18 03:21:08.398590 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:21:08.398601 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:21:08.398611 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:21:08.398622 | orchestrator | 2026-02-18 03:21:08.398633 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-18 03:21:08.398644 | orchestrator | Wednesday 18 February 2026 03:21:02 +0000 (0:00:09.672) 0:06:01.614 **** 2026-02-18 03:21:08.398669 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:21:08.398714 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:21:08.398731 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:21:08.398742 | orchestrator | 2026-02-18 03:21:08.398753 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-18 03:21:08.398763 | orchestrator | Wednesday 18 February 2026 03:21:03 +0000 (0:00:00.726) 0:06:02.341 **** 2026-02-18 03:21:08.398774 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:21:08.398785 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:21:08.398796 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:21:08.398807 | orchestrator | 2026-02-18 03:21:08.398840 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-18 03:21:08.398852 | orchestrator | Wednesday 18 February 2026 03:21:03 +0000 (0:00:00.368) 0:06:02.710 **** 2026-02-18 03:21:08.398863 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:21:08.398873 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:21:08.398884 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:21:08.398895 | orchestrator | 2026-02-18 03:21:08.398905 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 
2026-02-18 03:21:08.398916 | orchestrator | Wednesday 18 February 2026 03:21:04 +0000 (0:00:00.384) 0:06:03.095 **** 2026-02-18 03:21:08.398927 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:21:08.398938 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:21:08.398949 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:21:08.398960 | orchestrator | 2026-02-18 03:21:08.398970 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-18 03:21:08.398981 | orchestrator | Wednesday 18 February 2026 03:21:04 +0000 (0:00:00.373) 0:06:03.468 **** 2026-02-18 03:21:08.398992 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:21:08.399012 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:21:08.399023 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:21:08.399033 | orchestrator | 2026-02-18 03:21:08.399045 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-18 03:21:08.399056 | orchestrator | Wednesday 18 February 2026 03:21:05 +0000 (0:00:00.728) 0:06:04.197 **** 2026-02-18 03:21:08.399067 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:21:08.399078 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:21:08.399088 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:21:08.399099 | orchestrator | 2026-02-18 03:21:08.399110 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-18 03:21:08.399121 | orchestrator | Wednesday 18 February 2026 03:21:05 +0000 (0:00:00.362) 0:06:04.559 **** 2026-02-18 03:21:08.399131 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:21:08.399146 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:21:08.399164 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:21:08.399182 | orchestrator | 2026-02-18 03:21:08.399199 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-18 
03:21:08.399216 | orchestrator | Wednesday 18 February 2026 03:21:06 +0000 (0:00:00.940) 0:06:05.499 **** 2026-02-18 03:21:08.399234 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:21:08.399251 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:21:08.399266 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:21:08.399282 | orchestrator | 2026-02-18 03:21:08.399299 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:21:08.399319 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-18 03:21:08.399339 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-18 03:21:08.399358 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-18 03:21:08.399378 | orchestrator | 2026-02-18 03:21:08.399412 | orchestrator | 2026-02-18 03:21:08.399431 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:21:08.399451 | orchestrator | Wednesday 18 February 2026 03:21:07 +0000 (0:00:00.823) 0:06:06.323 **** 2026-02-18 03:21:08.399469 | orchestrator | =============================================================================== 2026-02-18 03:21:08.399486 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.83s 2026-02-18 03:21:08.399497 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.67s 2026-02-18 03:21:08.399508 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.84s 2026-02-18 03:21:08.399518 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.50s 2026-02-18 03:21:08.399529 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.01s 2026-02-18 03:21:08.399540 | orchestrator | 
haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.53s 2026-02-18 03:21:08.399550 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.40s 2026-02-18 03:21:08.399561 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.39s 2026-02-18 03:21:08.399571 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.33s 2026-02-18 03:21:08.399582 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.26s 2026-02-18 03:21:08.399593 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.06s 2026-02-18 03:21:08.399603 | orchestrator | loadbalancer : Wait for backup haproxy to start ------------------------- 4.06s 2026-02-18 03:21:08.399614 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.03s 2026-02-18 03:21:08.399625 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.69s 2026-02-18 03:21:08.399636 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.62s 2026-02-18 03:21:08.399646 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.54s 2026-02-18 03:21:08.399657 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.48s 2026-02-18 03:21:08.399668 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.47s 2026-02-18 03:21:08.399706 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.42s 2026-02-18 03:21:08.399718 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.31s 2026-02-18 03:21:11.067475 | orchestrator | 2026-02-18 03:21:11 | INFO  | Task 688bd62a-198e-4832-8442-edaed33f2307 (opensearch) was prepared for execution. 
2026-02-18 03:21:11.067563 | orchestrator | 2026-02-18 03:21:11 | INFO  | It takes a moment until task 688bd62a-198e-4832-8442-edaed33f2307 (opensearch) has been started and output is visible here.
2026-02-18 03:21:22.081404 | orchestrator |
2026-02-18 03:21:22.081483 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 03:21:22.081490 | orchestrator |
2026-02-18 03:21:22.081494 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 03:21:22.081498 | orchestrator | Wednesday 18 February 2026 03:21:15 +0000 (0:00:00.270) 0:00:00.270 ****
2026-02-18 03:21:22.081503 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:21:22.081508 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:21:22.081512 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:21:22.081516 | orchestrator |
2026-02-18 03:21:22.081520 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 03:21:22.081524 | orchestrator | Wednesday 18 February 2026 03:21:15 +0000 (0:00:00.319) 0:00:00.589 ****
2026-02-18 03:21:22.081539 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-18 03:21:22.081543 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-18 03:21:22.081547 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-18 03:21:22.081551 | orchestrator |
2026-02-18 03:21:22.081554 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-18 03:21:22.081574 | orchestrator |
2026-02-18 03:21:22.081578 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-18 03:21:22.081581 | orchestrator | Wednesday 18 February 2026 03:21:16 +0000 (0:00:00.459) 0:00:01.049 ****
2026-02-18 03:21:22.081588 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-02-18 03:21:22.081599 | orchestrator | 2026-02-18 03:21:22.081605 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-18 03:21:22.081612 | orchestrator | Wednesday 18 February 2026 03:21:16 +0000 (0:00:00.496) 0:00:01.546 **** 2026-02-18 03:21:22.081618 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-18 03:21:22.081624 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-18 03:21:22.081631 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-18 03:21:22.081637 | orchestrator | 2026-02-18 03:21:22.081644 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-18 03:21:22.081650 | orchestrator | Wednesday 18 February 2026 03:21:17 +0000 (0:00:00.669) 0:00:02.215 **** 2026-02-18 03:21:22.081659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:21:22.081670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:21:22.081749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:21:22.081766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:21:22.081781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:21:22.081788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:21:22.081794 | orchestrator | 2026-02-18 03:21:22.081800 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-18 03:21:22.081807 | orchestrator | Wednesday 18 February 2026 03:21:19 +0000 (0:00:01.693) 0:00:03.909 **** 2026-02-18 03:21:22.081813 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:21:22.081820 | orchestrator | 2026-02-18 03:21:22.081826 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-18 03:21:22.081831 | orchestrator | Wednesday 18 February 2026 03:21:19 +0000 (0:00:00.550) 0:00:04.460 **** 2026-02-18 03:21:22.081847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:21:22.887027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:21:22.887149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:21:22.887165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:21:22.887178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:21:22.887239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:21:22.887252 | orchestrator | 2026-02-18 03:21:22.887262 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-18 03:21:22.887272 | orchestrator | Wednesday 18 February 2026 03:21:22 +0000 (0:00:02.337) 0:00:06.797 **** 
2026-02-18 03:21:22.887282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-18 03:21:22.887292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-02-18 03:21:22.887302 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:21:22.887312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-18 03:21:22.887342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-18 03:21:23.959933 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:21:23.960005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-18 03:21:23.960017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-18 03:21:23.960024 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:21:23.960030 | orchestrator | 2026-02-18 03:21:23.960036 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-18 03:21:23.960042 | orchestrator | Wednesday 18 February 2026 03:21:22 +0000 (0:00:00.806) 0:00:07.604 **** 2026-02-18 03:21:23.960067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-18 03:21:23.960085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-18 03:21:23.960102 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:21:23.960108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-18 03:21:23.960114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-18 03:21:23.960119 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:21:23.960131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-18 03:21:23.960140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-18 03:21:23.960146 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:21:23.960151 | orchestrator | 2026-02-18 03:21:23.960156 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-18 03:21:23.960165 | orchestrator | Wednesday 18 February 2026 03:21:23 +0000 (0:00:01.072) 0:00:08.677 **** 2026-02-18 03:21:32.102400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:21:32.102519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:21:32.102539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:21:32.102600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:21:32.102639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:21:32.102656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:21:32.102681 | orchestrator | 2026-02-18 03:21:32.102752 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-18 03:21:32.102767 | orchestrator | Wednesday 18 February 2026 03:21:26 +0000 (0:00:02.283) 0:00:10.960 **** 2026-02-18 03:21:32.102781 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:21:32.102795 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:21:32.102809 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:21:32.102822 | orchestrator | 2026-02-18 03:21:32.102855 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-18 03:21:32.102868 | orchestrator | Wednesday 18 February 2026 03:21:28 +0000 (0:00:02.375) 0:00:13.335 **** 2026-02-18 03:21:32.102882 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:21:32.102895 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:21:32.102908 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:21:32.102922 | 
orchestrator | 2026-02-18 03:21:32.102935 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-18 03:21:32.102948 | orchestrator | Wednesday 18 February 2026 03:21:30 +0000 (0:00:01.767) 0:00:15.103 **** 2026-02-18 03:21:32.102962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:21:32.102984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-02-18 03:21:32.103010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-18 03:24:13.472358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-02-18 03:24:13.472583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:24:13.472634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-18 03:24:13.472656 | orchestrator | 2026-02-18 03:24:13.472676 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-18 03:24:13.472696 | orchestrator | Wednesday 18 February 2026 03:21:32 +0000 (0:00:01.717) 0:00:16.820 **** 2026-02-18 03:24:13.472715 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:24:13.472732 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:24:13.472749 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:24:13.472767 | orchestrator | 2026-02-18 03:24:13.472865 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-18 03:24:13.472887 | orchestrator | Wednesday 18 February 2026 03:21:32 +0000 (0:00:00.294) 0:00:17.114 **** 2026-02-18 03:24:13.472903 | orchestrator | 2026-02-18 03:24:13.472920 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-18 03:24:13.472937 | orchestrator | Wednesday 18 February 2026 03:21:32 +0000 (0:00:00.068) 0:00:17.182 **** 2026-02-18 03:24:13.472953 | orchestrator | 2026-02-18 03:24:13.472971 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-18 03:24:13.473004 | orchestrator | Wednesday 18 February 2026 03:21:32 +0000 (0:00:00.070) 0:00:17.253 **** 2026-02-18 03:24:13.473022 | orchestrator | 2026-02-18 03:24:13.473039 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-18 03:24:13.473082 | orchestrator | Wednesday 18 February 2026 03:21:32 +0000 (0:00:00.064) 0:00:17.318 **** 2026-02-18 03:24:13.473101 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:24:13.473116 | orchestrator | 
2026-02-18 03:24:13.473133 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-18 03:24:13.473151 | orchestrator | Wednesday 18 February 2026 03:21:32 +0000 (0:00:00.207) 0:00:17.525 **** 2026-02-18 03:24:13.473169 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:24:13.473186 | orchestrator | 2026-02-18 03:24:13.473203 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-18 03:24:13.473219 | orchestrator | Wednesday 18 February 2026 03:21:33 +0000 (0:00:00.663) 0:00:18.188 **** 2026-02-18 03:24:13.473236 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:24:13.473252 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:24:13.473268 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:24:13.473285 | orchestrator | 2026-02-18 03:24:13.473301 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-18 03:24:13.473318 | orchestrator | Wednesday 18 February 2026 03:22:40 +0000 (0:01:06.892) 0:01:25.081 **** 2026-02-18 03:24:13.473334 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:24:13.473350 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:24:13.473366 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:24:13.473383 | orchestrator | 2026-02-18 03:24:13.473400 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-18 03:24:13.473416 | orchestrator | Wednesday 18 February 2026 03:24:02 +0000 (0:01:22.026) 0:02:47.108 **** 2026-02-18 03:24:13.473434 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:24:13.473451 | orchestrator | 2026-02-18 03:24:13.473467 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-18 03:24:13.473484 | orchestrator | Wednesday 18 February 2026 03:24:02 +0000 
(0:00:00.506) 0:02:47.614 **** 2026-02-18 03:24:13.473500 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:24:13.473517 | orchestrator | 2026-02-18 03:24:13.473533 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-18 03:24:13.473550 | orchestrator | Wednesday 18 February 2026 03:24:05 +0000 (0:00:02.596) 0:02:50.211 **** 2026-02-18 03:24:13.473567 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:24:13.473584 | orchestrator | 2026-02-18 03:24:13.473602 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-18 03:24:13.473620 | orchestrator | Wednesday 18 February 2026 03:24:07 +0000 (0:00:02.299) 0:02:52.510 **** 2026-02-18 03:24:13.473636 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:24:13.473652 | orchestrator | 2026-02-18 03:24:13.473671 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-18 03:24:13.474001 | orchestrator | Wednesday 18 February 2026 03:24:10 +0000 (0:00:02.752) 0:02:55.262 **** 2026-02-18 03:24:13.474109 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:24:13.474125 | orchestrator | 2026-02-18 03:24:13.474140 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:24:13.474156 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-18 03:24:13.474172 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-18 03:24:13.474202 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-18 03:24:13.474217 | orchestrator | 2026-02-18 03:24:13.474233 | orchestrator | 2026-02-18 03:24:13.474260 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:24:13.474277 | orchestrator | Wednesday 18 
February 2026 03:24:13 +0000 (0:00:02.912) 0:02:58.175 **** 2026-02-18 03:24:13.474292 | orchestrator | =============================================================================== 2026-02-18 03:24:13.474307 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 82.03s 2026-02-18 03:24:13.474322 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.89s 2026-02-18 03:24:13.474337 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.91s 2026-02-18 03:24:13.474352 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.75s 2026-02-18 03:24:13.474367 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.60s 2026-02-18 03:24:13.474382 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.38s 2026-02-18 03:24:13.474396 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.34s 2026-02-18 03:24:13.474411 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.30s 2026-02-18 03:24:13.474425 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.28s 2026-02-18 03:24:13.474440 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.77s 2026-02-18 03:24:13.474455 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.72s 2026-02-18 03:24:13.474469 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.69s 2026-02-18 03:24:13.474484 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.07s 2026-02-18 03:24:13.474499 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.81s 2026-02-18 03:24:13.474514 | orchestrator | opensearch : Setting 
sysctl values -------------------------------------- 0.67s 2026-02-18 03:24:13.474529 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.66s 2026-02-18 03:24:13.474563 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-02-18 03:24:13.861946 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-02-18 03:24:13.862053 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-02-18 03:24:13.862061 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-02-18 03:24:16.430629 | orchestrator | 2026-02-18 03:24:16 | INFO  | Task 674157ef-1bbf-42ac-9995-0c7de70fba0d (memcached) was prepared for execution. 2026-02-18 03:24:16.430765 | orchestrator | 2026-02-18 03:24:16 | INFO  | It takes a moment until task 674157ef-1bbf-42ac-9995-0c7de70fba0d (memcached) has been started and output is visible here. 
2026-02-18 03:24:28.821633 | orchestrator | 2026-02-18 03:24:28.821735 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 03:24:28.821748 | orchestrator | 2026-02-18 03:24:28.821759 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 03:24:28.821770 | orchestrator | Wednesday 18 February 2026 03:24:20 +0000 (0:00:00.272) 0:00:00.272 **** 2026-02-18 03:24:28.821780 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:24:28.821814 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:24:28.821824 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:24:28.821833 | orchestrator | 2026-02-18 03:24:28.821843 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 03:24:28.821852 | orchestrator | Wednesday 18 February 2026 03:24:21 +0000 (0:00:00.323) 0:00:00.596 **** 2026-02-18 03:24:28.821863 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-18 03:24:28.821873 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-18 03:24:28.821882 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-18 03:24:28.821891 | orchestrator | 2026-02-18 03:24:28.821900 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-18 03:24:28.821938 | orchestrator | 2026-02-18 03:24:28.821948 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-18 03:24:28.821958 | orchestrator | Wednesday 18 February 2026 03:24:21 +0000 (0:00:00.486) 0:00:01.083 **** 2026-02-18 03:24:28.821968 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:24:28.821979 | orchestrator | 2026-02-18 03:24:28.821989 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-02-18 03:24:28.821998 | orchestrator | Wednesday 18 February 2026 03:24:22 +0000 (0:00:00.514) 0:00:01.598 **** 2026-02-18 03:24:28.822007 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-18 03:24:28.822058 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-18 03:24:28.822069 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-18 03:24:28.822080 | orchestrator | 2026-02-18 03:24:28.822089 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-18 03:24:28.822099 | orchestrator | Wednesday 18 February 2026 03:24:22 +0000 (0:00:00.646) 0:00:02.244 **** 2026-02-18 03:24:28.822109 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-18 03:24:28.822119 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-18 03:24:28.822129 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-18 03:24:28.822139 | orchestrator | 2026-02-18 03:24:28.822149 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-18 03:24:28.822159 | orchestrator | Wednesday 18 February 2026 03:24:24 +0000 (0:00:01.844) 0:00:04.089 **** 2026-02-18 03:24:28.822208 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:24:28.822221 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:24:28.822235 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:24:28.822245 | orchestrator | 2026-02-18 03:24:28.822257 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-18 03:24:28.822269 | orchestrator | Wednesday 18 February 2026 03:24:26 +0000 (0:00:01.517) 0:00:05.606 **** 2026-02-18 03:24:28.822282 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:24:28.822294 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:24:28.822305 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:24:28.822317 | orchestrator | 2026-02-18 
03:24:28.822328 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:24:28.822341 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:24:28.822354 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:24:28.822366 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:24:28.822376 | orchestrator | 2026-02-18 03:24:28.822387 | orchestrator | 2026-02-18 03:24:28.822398 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:24:28.822409 | orchestrator | Wednesday 18 February 2026 03:24:28 +0000 (0:00:02.161) 0:00:07.767 **** 2026-02-18 03:24:28.822422 | orchestrator | =============================================================================== 2026-02-18 03:24:28.822433 | orchestrator | memcached : Restart memcached container --------------------------------- 2.16s 2026-02-18 03:24:28.822445 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.84s 2026-02-18 03:24:28.822457 | orchestrator | memcached : Check memcached container ----------------------------------- 1.52s 2026-02-18 03:24:28.822469 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.65s 2026-02-18 03:24:28.822480 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.51s 2026-02-18 03:24:28.822492 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2026-02-18 03:24:28.822513 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-02-18 03:24:31.252348 | orchestrator | 2026-02-18 03:24:31 | INFO  | Task ed9751c8-f6e0-42e1-83b9-391bdd4be536 (redis) was prepared for execution. 
2026-02-18 03:24:31.252447 | orchestrator | 2026-02-18 03:24:31 | INFO  | It takes a moment until task ed9751c8-f6e0-42e1-83b9-391bdd4be536 (redis) has been started and output is visible here. 2026-02-18 03:24:40.475079 | orchestrator | 2026-02-18 03:24:40.475167 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 03:24:40.475179 | orchestrator | 2026-02-18 03:24:40.475187 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 03:24:40.475196 | orchestrator | Wednesday 18 February 2026 03:24:35 +0000 (0:00:00.276) 0:00:00.276 **** 2026-02-18 03:24:40.475204 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:24:40.475213 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:24:40.475221 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:24:40.475229 | orchestrator | 2026-02-18 03:24:40.475237 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 03:24:40.475245 | orchestrator | Wednesday 18 February 2026 03:24:35 +0000 (0:00:00.315) 0:00:00.592 **** 2026-02-18 03:24:40.475253 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-18 03:24:40.475261 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-18 03:24:40.475269 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-18 03:24:40.475276 | orchestrator | 2026-02-18 03:24:40.475284 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-18 03:24:40.475292 | orchestrator | 2026-02-18 03:24:40.475300 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-18 03:24:40.475308 | orchestrator | Wednesday 18 February 2026 03:24:36 +0000 (0:00:00.423) 0:00:01.015 **** 2026-02-18 03:24:40.475316 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-18 03:24:40.475324 | orchestrator | 2026-02-18 03:24:40.475332 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-18 03:24:40.475340 | orchestrator | Wednesday 18 February 2026 03:24:36 +0000 (0:00:00.498) 0:00:01.514 **** 2026-02-18 03:24:40.475351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:40.475365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:40.475374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:40.475408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 03:24:40.475432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 03:24:40.475442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 03:24:40.475450 | orchestrator | 2026-02-18 03:24:40.475458 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-18 03:24:40.475467 | orchestrator | Wednesday 18 February 2026 03:24:37 +0000 (0:00:01.097) 0:00:02.612 **** 2026-02-18 03:24:40.475475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:40.475574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:40.475601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:40.475629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 03:24:40.475655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724538 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724645 | orchestrator | 2026-02-18 03:24:44.724663 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-18 03:24:44.724677 | orchestrator | Wednesday 18 February 2026 03:24:40 +0000 (0:00:02.563) 0:00:05.176 **** 2026-02-18 03:24:44.724690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724872 | orchestrator | 2026-02-18 03:24:44.724883 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-18 03:24:44.724894 | orchestrator | Wednesday 18 February 2026 03:24:42 +0000 (0:00:02.527) 0:00:07.703 **** 2026-02-18 03:24:44.724906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 03:24:44.724968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-18 03:24:44.724987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-18 03:24:56.326079 | orchestrator |
2026-02-18 03:24:56.326179 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-18 03:24:56.326194 | orchestrator | Wednesday 18 February 2026 03:24:44 +0000 (0:00:01.480) 0:00:09.183 ****
2026-02-18 03:24:56.326204 | orchestrator |
2026-02-18 03:24:56.326212 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-18 03:24:56.326221 | orchestrator | Wednesday 18 February 2026 03:24:44 +0000 (0:00:00.075) 0:00:09.259 ****
2026-02-18 03:24:56.326230 | orchestrator |
2026-02-18 03:24:56.326239 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-18 03:24:56.326247 | orchestrator | Wednesday 18 February 2026 03:24:44 +0000 (0:00:00.093) 0:00:09.352 ****
2026-02-18 03:24:56.326256 | orchestrator |
2026-02-18 03:24:56.326264 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-18 03:24:56.326273 | orchestrator | Wednesday 18 February 2026 03:24:44 +0000 (0:00:00.071) 0:00:09.424 ****
2026-02-18 03:24:56.326282 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:24:56.326291 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:24:56.326300 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:24:56.326308 | orchestrator |
2026-02-18 03:24:56.326317 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-18 03:24:56.326326 | orchestrator | Wednesday 18 February 2026 03:24:47 +0000 (0:00:03.029) 0:00:12.454 ****
2026-02-18 03:24:56.326358 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:24:56.326367 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:24:56.326376 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:24:56.326385 | orchestrator |
2026-02-18 03:24:56.326394 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:24:56.326403 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 03:24:56.326421 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 03:24:56.326452 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 03:24:56.326468 | orchestrator |
2026-02-18 03:24:56.326483 | orchestrator |
2026-02-18 03:24:56.326497 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:24:56.326513 | orchestrator | Wednesday 18 February 2026 03:24:55 +0000 (0:00:08.221) 0:00:20.676 ****
2026-02-18 03:24:56.326527 | orchestrator | ===============================================================================
2026-02-18 03:24:56.326542 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.22s
2026-02-18 03:24:56.326557 | orchestrator | redis : Restart redis container ----------------------------------------- 3.03s
2026-02-18 03:24:56.326573 | orchestrator | redis : Copying over default config.json files -------------------------- 2.56s
2026-02-18 03:24:56.326588 | orchestrator | redis : Copying over redis config files --------------------------------- 2.53s
2026-02-18 03:24:56.326605 | orchestrator | redis : Check redis containers ------------------------------------------ 1.48s
2026-02-18 03:24:56.326620 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.10s
2026-02-18 03:24:56.326638 | orchestrator | redis : include_tasks --------------------------------------------------- 0.50s
2026-02-18 03:24:56.326655 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s
2026-02-18 03:24:56.326672 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-02-18 03:24:56.326687 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s
2026-02-18 03:24:58.745917 | orchestrator | 2026-02-18 03:24:58 | INFO  | Task a6540b2b-75cd-42a2-a1cf-324f2e3694ac (mariadb) was prepared for execution.
2026-02-18 03:24:58.746105 | orchestrator | 2026-02-18 03:24:58 | INFO  | It takes a moment until task a6540b2b-75cd-42a2-a1cf-324f2e3694ac (mariadb) has been started and output is visible here.
2026-02-18 03:25:12.938697 | orchestrator |
2026-02-18 03:25:12.938835 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 03:25:12.938851 | orchestrator |
2026-02-18 03:25:12.938861 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 03:25:12.938872 | orchestrator | Wednesday 18 February 2026 03:25:03 +0000 (0:00:00.179) 0:00:00.179 ****
2026-02-18 03:25:12.938882 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:25:12.938892 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:25:12.938902 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:25:12.938911 | orchestrator |
2026-02-18 03:25:12.938921 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 03:25:12.938931 | orchestrator | Wednesday 18 February 2026 03:25:03 +0000 (0:00:00.311) 0:00:00.490 ****
2026-02-18 03:25:12.938941 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-18 03:25:12.938950 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-18 03:25:12.938960 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-18 03:25:12.938969 | orchestrator |
2026-02-18 03:25:12.938978 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-18 03:25:12.938987 | orchestrator |
2026-02-18 03:25:12.938997 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-18 03:25:12.939030 | orchestrator | Wednesday 18 February 2026 03:25:03 +0000 (0:00:00.575) 0:00:01.065 ****
2026-02-18 03:25:12.939041 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 03:25:12.939050 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 03:25:12.939060 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 03:25:12.939069 | orchestrator |
2026-02-18 03:25:12.939078 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-18 03:25:12.939088 | orchestrator | Wednesday 18 February 2026 03:25:04 +0000 (0:00:00.370) 0:00:01.436 **** 2026-02-18 03:25:12.939098 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:25:12.939108 | orchestrator | 2026-02-18 03:25:12.939118 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-18 03:25:12.939127 | orchestrator | Wednesday 18 February 2026 03:25:04 +0000 (0:00:00.555) 0:00:01.991 **** 2026-02-18 03:25:12.939157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 03:25:12.939190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 03:25:12.939216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-18 03:25:12.939227 | orchestrator |
2026-02-18 03:25:12.939238 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-02-18 03:25:12.939248 | orchestrator | Wednesday 18 February 2026 03:25:07 +0000 (0:00:02.688) 0:00:04.679 ****
2026-02-18 03:25:12.939260 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:25:12.939272 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:25:12.939282 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:25:12.939293 | orchestrator |
2026-02-18 03:25:12.939304 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-02-18 03:25:12.939315 | orchestrator | Wednesday 18 February 2026 03:25:08 +0000 (0:00:00.651) 0:00:05.331 ****
2026-02-18 03:25:12.939326 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:25:12.939337 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:25:12.939348 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:25:12.939359 | orchestrator |
2026-02-18 03:25:12.939371 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-18 03:25:12.939382 | orchestrator | Wednesday 18 February 2026 03:25:09 +0000 (0:00:01.464) 0:00:06.795 ****
2026-02-18 03:25:12.939402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro',
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 03:25:20.883099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 03:25:20.883205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 03:25:20.883238 | orchestrator | 2026-02-18 03:25:20.883250 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-18 03:25:20.883262 | orchestrator | Wednesday 18 February 2026 03:25:12 +0000 (0:00:03.247) 0:00:10.043 **** 2026-02-18 03:25:20.883271 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:25:20.883281 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:25:20.883291 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:25:20.883300 | orchestrator | 2026-02-18 03:25:20.883310 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-18 03:25:20.883334 | orchestrator | Wednesday 18 February 2026 03:25:14 +0000 (0:00:01.120) 0:00:11.164 **** 2026-02-18 03:25:20.883345 | 
orchestrator | changed: [testbed-node-0] 2026-02-18 03:25:20.883354 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:25:20.883363 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:25:20.883373 | orchestrator | 2026-02-18 03:25:20.883382 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-18 03:25:20.883392 | orchestrator | Wednesday 18 February 2026 03:25:18 +0000 (0:00:03.976) 0:00:15.141 **** 2026-02-18 03:25:20.883402 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:25:20.883411 | orchestrator | 2026-02-18 03:25:20.883420 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-18 03:25:20.883429 | orchestrator | Wednesday 18 February 2026 03:25:18 +0000 (0:00:00.571) 0:00:15.712 **** 2026-02-18 03:25:20.883446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:25:20.883464 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:25:20.883480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:25:25.968412 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:25:25.968533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:25:25.968567 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:25:25.968576 | orchestrator | 2026-02-18 03:25:25.968583 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-18 03:25:25.968591 | orchestrator | Wednesday 18 February 2026 03:25:20 +0000 (0:00:02.271) 0:00:17.984 **** 2026-02-18 03:25:25.968650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:25:25.968661 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:25:25.968689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:25:25.968705 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:25:25.968712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:25:25.968719 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:25:25.968726 | orchestrator | 2026-02-18 03:25:25.968733 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-18 03:25:25.968739 | orchestrator | Wednesday 18 February 2026 03:25:23 +0000 (0:00:02.543) 0:00:20.527 **** 2026-02-18 03:25:25.968755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:25:29.150519 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:25:29.150628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:25:29.150650 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:25:29.150681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 03:25:29.150717 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:25:29.150729 | orchestrator | 2026-02-18 03:25:29.150742 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-18 03:25:29.150755 | orchestrator | Wednesday 18 February 2026 03:25:25 +0000 (0:00:02.549) 0:00:23.076 **** 2026-02-18 03:25:29.150787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 03:25:29.150802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 03:25:29.150865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 03:27:48.638295 | orchestrator | 2026-02-18 03:27:48.638379 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-18 03:27:48.638388 | orchestrator | Wednesday 18 February 2026 03:25:29 +0000 (0:00:03.183) 0:00:26.260 **** 2026-02-18 03:27:48.638392 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:27:48.638397 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:27:48.638402 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:27:48.638406 | orchestrator | 2026-02-18 03:27:48.638410 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-18 03:27:48.638414 | orchestrator | Wednesday 18 February 2026 03:25:30 +0000 (0:00:00.923) 0:00:27.184 **** 2026-02-18 03:27:48.638418 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:27:48.638423 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:27:48.638427 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:27:48.638431 | orchestrator | 2026-02-18 03:27:48.638435 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] *************
2026-02-18 03:27:48.638439 | orchestrator | Wednesday 18 February 2026 03:25:30 +0000 (0:00:00.588) 0:00:27.773 ****
2026-02-18 03:27:48.638443 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:27:48.638446 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:27:48.638450 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:27:48.638454 | orchestrator |
2026-02-18 03:27:48.638458 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-18 03:27:48.638462 | orchestrator | Wednesday 18 February 2026 03:25:31 +0000 (0:00:00.368) 0:00:28.142 ****
2026-02-18 03:27:48.638467 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-18 03:27:48.638473 | orchestrator | ...ignoring
2026-02-18 03:27:48.638477 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-18 03:27:48.638481 | orchestrator | ...ignoring
2026-02-18 03:27:48.638485 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-18 03:27:48.638489 | orchestrator | ...ignoring
2026-02-18 03:27:48.638508 | orchestrator |
2026-02-18 03:27:48.638512 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-18 03:27:48.638516 | orchestrator | Wednesday 18 February 2026 03:25:41 +0000 (0:00:10.930) 0:00:39.072 ****
2026-02-18 03:27:48.638520 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:27:48.638524 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:27:48.638527 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:27:48.638531 | orchestrator |
2026-02-18 03:27:48.638535 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-18 03:27:48.638548 | orchestrator | Wednesday 18 February 2026 03:25:42 +0000 (0:00:00.511) 0:00:39.584 ****
2026-02-18 03:27:48.638552 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:27:48.638556 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:27:48.638560 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:27:48.638564 | orchestrator |
2026-02-18 03:27:48.638567 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-18 03:27:48.638571 | orchestrator | Wednesday 18 February 2026 03:25:43 +0000 (0:00:00.682) 0:00:40.266 ****
2026-02-18 03:27:48.638575 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:27:48.638579 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:27:48.638583 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:27:48.638587 | orchestrator |
2026-02-18 03:27:48.638600 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-18 03:27:48.638605 | orchestrator | Wednesday 18 February 2026 03:25:43 +0000 (0:00:00.457) 0:00:40.723 ****
2026-02-18 03:27:48.638609 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:27:48.638613 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:27:48.638616 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:27:48.638620 | orchestrator |
2026-02-18 03:27:48.638624 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-18 03:27:48.638628 | orchestrator | Wednesday 18 February 2026 03:25:44 +0000 (0:00:00.448) 0:00:41.172 ****
2026-02-18 03:27:48.638631 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:27:48.638635 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:27:48.638639 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:27:48.638643 | orchestrator |
2026-02-18 03:27:48.638647 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-18 03:27:48.638651 | orchestrator | Wednesday 18 February 2026 03:25:44 +0000 (0:00:00.468) 0:00:41.640 ****
2026-02-18 03:27:48.638655 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:27:48.638659 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:27:48.638662 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:27:48.638666 | orchestrator |
2026-02-18 03:27:48.638670 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-18 03:27:48.638674 | orchestrator | Wednesday 18 February 2026 03:25:45 +0000 (0:00:01.050) 0:00:42.691 ****
2026-02-18 03:27:48.638678 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:27:48.638681 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:27:48.638685 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-18 03:27:48.638689 | orchestrator |
2026-02-18 03:27:48.638693 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-18 03:27:48.638697 | orchestrator | Wednesday 18 February 2026 03:25:45 +0000 (0:00:00.400) 0:00:43.092 ****
2026-02-18 03:27:48.638700 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:27:48.638704 | orchestrator |
2026-02-18 03:27:48.638708 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-18 03:27:48.638712 | orchestrator | Wednesday 18 February 2026 03:25:56 +0000 (0:00:10.679) 0:00:53.771 ****
2026-02-18 03:27:48.638716 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:27:48.638719 | orchestrator |
2026-02-18 03:27:48.638723 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-18 03:27:48.638727 | orchestrator | Wednesday 18 February 2026 03:25:56 +0000 (0:00:00.151) 0:00:53.922 ****
2026-02-18 03:27:48.638731 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:27:48.638750 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:27:48.638754 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:27:48.638758 | orchestrator |
2026-02-18 03:27:48.638762 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-18 03:27:48.638766 | orchestrator | Wednesday 18 February 2026 03:25:57 +0000 (0:00:00.998) 0:00:54.920 ****
2026-02-18 03:27:48.638770 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:27:48.638774 | orchestrator |
2026-02-18 03:27:48.638777 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-18 03:27:48.638781 | orchestrator | Wednesday 18 February 2026 03:26:05 +0000 (0:00:08.059) 0:01:02.980 ****
2026-02-18 03:27:48.638785 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:27:48.638789 | orchestrator |
2026-02-18 03:27:48.638793 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-18 03:27:48.638797 | orchestrator | Wednesday 18 February 2026 03:26:07 +0000 (0:00:01.674) 0:01:04.655 ****
2026-02-18 03:27:48.638801 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:27:48.638805 | orchestrator |
2026-02-18 03:27:48.638809 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-02-18 03:27:48.638814 | orchestrator | Wednesday 18 February 2026 03:26:10 +0000 (0:00:02.573) 0:01:07.229 ****
2026-02-18 03:27:48.638818 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:27:48.638822 | orchestrator |
2026-02-18 03:27:48.638826 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-18 03:27:48.638830 | orchestrator | Wednesday 18 February 2026 03:26:10 +0000 (0:00:00.120) 0:01:07.350 ****
2026-02-18 03:27:48.638835 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:27:48.638839 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:27:48.638843 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:27:48.638847 | orchestrator |
2026-02-18 03:27:48.638852 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-18 03:27:48.638856 | orchestrator | Wednesday 18 February 2026 03:26:10 +0000 (0:00:00.353) 0:01:07.704 ****
2026-02-18 03:27:48.638861 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:27:48.638866 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-18 03:27:48.638871 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:27:48.638876 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:27:48.638977 | orchestrator |
2026-02-18 03:27:48.638984 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-18 03:27:48.638988 | orchestrator | skipping: no hosts matched
2026-02-18 03:27:48.638993 | orchestrator |
2026-02-18 03:27:48.638998 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-18 03:27:48.639003 | orchestrator |
2026-02-18 03:27:48.639008 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-18 03:27:48.639013 | orchestrator | Wednesday 18 February 2026 03:26:11 +0000 (0:00:00.609) 0:01:08.313 ****
2026-02-18 03:27:48.639018 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:27:48.639023 | orchestrator |
2026-02-18 03:27:48.639028 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-18 03:27:48.639033 | orchestrator | Wednesday 18 February 2026 03:26:29 +0000 (0:00:18.572) 0:01:26.885 ****
2026-02-18 03:27:48.639038 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:27:48.639043 | orchestrator |
2026-02-18 03:27:48.639048 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-18 03:27:48.639053 | orchestrator | Wednesday 18 February 2026 03:26:46 +0000 (0:00:16.645) 0:01:43.531 ****
2026-02-18 03:27:48.639058 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:27:48.639063 | orchestrator |
2026-02-18 03:27:48.639071 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-18 03:27:48.639076 | orchestrator |
2026-02-18 03:27:48.639084 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-18 03:27:48.639090 | orchestrator | Wednesday 18 February 2026 03:26:48 +0000 (0:00:02.467) 0:01:45.999 ****
2026-02-18 03:27:48.639100 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:27:48.639105 | orchestrator |
2026-02-18 03:27:48.639110 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-18 03:27:48.639114 | orchestrator | Wednesday 18 February 2026 03:27:08 +0000 (0:00:19.495) 0:02:05.494 ****
2026-02-18 03:27:48.639119 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:27:48.639124 | orchestrator |
2026-02-18 03:27:48.639129 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-18 03:27:48.639134 | orchestrator | Wednesday 18 February 2026 03:27:24 +0000 (0:00:16.548) 0:02:22.043 ****
2026-02-18 03:27:48.639139 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:27:48.639143 | orchestrator |
2026-02-18 03:27:48.639148 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-18 03:27:48.639153 | orchestrator |
2026-02-18 03:27:48.639158 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-18 03:27:48.639163 | orchestrator | Wednesday 18 February 2026 03:27:27 +0000 (0:00:02.540) 0:02:24.584 ****
2026-02-18 03:27:48.639168 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:27:48.639173 | orchestrator |
2026-02-18 03:27:48.639178 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-18 03:27:48.639183 | orchestrator | Wednesday 18 February 2026 03:27:39 +0000 (0:00:12.079) 0:02:36.664 ****
2026-02-18 03:27:48.639187 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:27:48.639194 | orchestrator |
2026-02-18 03:27:48.639201 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-18 03:27:48.639208 | orchestrator | Wednesday 18 February 2026 03:27:45 +0000 (0:00:05.575) 0:02:42.239 ****
2026-02-18 03:27:48.639215 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:27:48.639222 | orchestrator |
2026-02-18 03:27:48.639229 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-18 03:27:48.639236 | orchestrator |
2026-02-18 03:27:48.639243 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-18 03:27:48.639250 | orchestrator | Wednesday 18 February 2026 03:27:47 +0000 (0:00:02.804) 0:02:45.044 ****
2026-02-18 03:27:48.639257 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:27:48.639263 | orchestrator |
2026-02-18 03:27:48.639270 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-18 03:27:48.639283 | orchestrator | Wednesday 18 February 2026 03:27:48 +0000 (0:00:00.693) 0:02:45.738 ****
2026-02-18 03:28:01.516546 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:28:01.516674 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:28:01.516689 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:28:01.516700 | orchestrator |
2026-02-18 03:28:01.516712 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-18 03:28:01.516723 | orchestrator | Wednesday 18 February 2026 03:27:50 +0000 (0:00:02.340) 0:02:48.079 ****
2026-02-18 03:28:01.516733 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:28:01.516743 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:28:01.516752 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:28:01.516762 | orchestrator |
2026-02-18 03:28:01.516771 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-18 03:28:01.516781 | orchestrator | Wednesday 18 February 2026 03:27:53 +0000 (0:00:02.134) 0:02:50.213 ****
2026-02-18 03:28:01.516791 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:28:01.516800 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:28:01.516810 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:28:01.516819 | orchestrator |
2026-02-18 03:28:01.516829 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-02-18 03:28:01.516838 | orchestrator | Wednesday 18 February 2026 03:27:55 +0000 (0:00:02.403) 0:02:52.617 ****
2026-02-18 03:28:01.516850 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:28:01.516867 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:28:01.516911 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:28:01.516971 | orchestrator |
2026-02-18 03:28:01.516988 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-18 03:28:01.517004 | orchestrator | Wednesday 18 February 2026 03:27:57 +0000 (0:00:02.169) 0:02:54.786 ****
2026-02-18 03:28:01.517021 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:28:01.517038 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:28:01.517055 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:28:01.517072 | orchestrator |
2026-02-18 03:28:01.517089 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-18 03:28:01.517104 | orchestrator | Wednesday 18 February 2026 03:28:00 +0000 (0:00:03.064) 0:02:57.851 ****
2026-02-18 03:28:01.517116 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:28:01.517128 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:28:01.517140 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:28:01.517151 | orchestrator |
2026-02-18 03:28:01.517162 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:28:01.517174 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-02-18 03:28:01.517188 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-18 03:28:01.517199 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-18 03:28:01.517210 | orchestrator |
2026-02-18 03:28:01.517222 | orchestrator |
2026-02-18 03:28:01.517233 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:28:01.517245 | orchestrator | Wednesday 18 February 2026 03:28:01 +0000 (0:00:00.422) 0:02:58.274 ****
2026-02-18 03:28:01.517256 | orchestrator | ===============================================================================
2026-02-18 03:28:01.517291 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.07s
2026-02-18 03:28:01.517303 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.19s
2026-02-18 03:28:01.517314 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.08s
2026-02-18 03:28:01.517325 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.93s
2026-02-18 03:28:01.517336 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.68s
2026-02-18 03:28:01.517347 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.06s
2026-02-18 03:28:01.517359 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.58s
2026-02-18 03:28:01.517371 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.01s
2026-02-18 03:28:01.517382 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.98s
2026-02-18 03:28:01.517394 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.25s
2026-02-18 03:28:01.517405 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.18s
2026-02-18 03:28:01.517415 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.06s
2026-02-18 03:28:01.517425 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.80s
2026-02-18 03:28:01.517434 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.69s
2026-02-18 03:28:01.517444 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.57s
2026-02-18 03:28:01.517453 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.55s
2026-02-18 03:28:01.517463 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.54s
2026-02-18 03:28:01.517480 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.40s
2026-02-18 03:28:01.517495 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.34s
2026-02-18 03:28:01.517523 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.27s
2026-02-18 03:28:03.914355 | orchestrator | 2026-02-18 03:28:03 | INFO  | Task 073ff1db-93db-45fe-bec9-9ab0d4b11a44 (rabbitmq) was prepared for execution.
2026-02-18 03:28:03.914458 | orchestrator | 2026-02-18 03:28:03 | INFO  | It takes a moment until task 073ff1db-93db-45fe-bec9-9ab0d4b11a44 (rabbitmq) has been started and output is visible here.
2026-02-18 03:28:17.257991 | orchestrator |
2026-02-18 03:28:17.258207 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 03:28:17.258240 | orchestrator |
2026-02-18 03:28:17.258260 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 03:28:17.258278 | orchestrator | Wednesday 18 February 2026 03:28:08 +0000 (0:00:00.194) 0:00:00.194 ****
2026-02-18 03:28:17.258296 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:28:17.258315 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:28:17.258334 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:28:17.258352 | orchestrator |
2026-02-18 03:28:17.258370 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 03:28:17.258388 | orchestrator | Wednesday 18 February 2026 03:28:08 +0000 (0:00:00.322) 0:00:00.517 ****
2026-02-18 03:28:17.258406 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-18 03:28:17.258424 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-18 03:28:17.258445 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-18 03:28:17.258463 | orchestrator |
2026-02-18 03:28:17.258482 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-18 03:28:17.258503 | orchestrator |
2026-02-18 03:28:17.258522 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-18 03:28:17.258540 | orchestrator | Wednesday 18 February 2026 03:28:09 +0000 (0:00:00.574) 0:00:01.092 ****
2026-02-18 03:28:17.258561 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:28:17.258582 | orchestrator |
2026-02-18 03:28:17.258602 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-18 03:28:17.258620 | orchestrator | Wednesday 18 February 2026 03:28:09 +0000 (0:00:00.540) 0:00:01.633 ****
2026-02-18 03:28:17.258639 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:28:17.258659 | orchestrator |
2026-02-18 03:28:17.258679 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-02-18 03:28:17.258698 | orchestrator | Wednesday 18 February 2026 03:28:10 +0000 (0:00:00.979) 0:00:02.612 ****
2026-02-18 03:28:17.258717 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:28:17.258736 | orchestrator |
2026-02-18 03:28:17.258755 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-02-18 03:28:17.258773 | orchestrator | Wednesday 18 February 2026 03:28:10 +0000 (0:00:00.356) 0:00:02.975 ****
2026-02-18 03:28:17.258793 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:28:17.258812 | orchestrator |
2026-02-18 03:28:17.258830 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-02-18 03:28:17.258846 | orchestrator | Wednesday 18 February 2026 03:28:11 +0000 (0:00:00.359) 0:00:03.331 ****
2026-02-18 03:28:17.258857 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:28:17.258868 | orchestrator |
2026-02-18 03:28:17.258878 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-02-18 03:28:17.258889 | orchestrator | Wednesday 18 February 2026 03:28:11 +0000 (0:00:00.359) 0:00:03.691 ****
2026-02-18 03:28:17.258975 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:28:17.258986 | orchestrator |
2026-02-18 03:28:17.258996 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-18 03:28:17.259006 | orchestrator | Wednesday 18 February 2026 03:28:12 +0000 (0:00:00.559) 0:00:04.251 ****
2026-02-18 03:28:17.259039 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:28:17.259086 | orchestrator |
2026-02-18 03:28:17.259104 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-18 03:28:17.259121 | orchestrator | Wednesday 18 February 2026 03:28:13 +0000 (0:00:00.893) 0:00:05.144 ****
2026-02-18 03:28:17.259136 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:28:17.259153 | orchestrator |
2026-02-18 03:28:17.259170 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-02-18 03:28:17.259187 | orchestrator | Wednesday 18 February 2026 03:28:13 +0000 (0:00:00.858) 0:00:06.003 ****
2026-02-18 03:28:17.259204 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:28:17.259222 | orchestrator |
2026-02-18 03:28:17.259239 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-02-18 03:28:17.259256 | orchestrator | Wednesday 18 February 2026 03:28:14 +0000 (0:00:00.386) 0:00:06.390 ****
2026-02-18 03:28:17.259274 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:28:17.259292 | orchestrator |
2026-02-18 03:28:17.259310 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-02-18 03:28:17.259324 | orchestrator | Wednesday 18 February 2026 03:28:14 +0000 (0:00:00.397) 0:00:06.787 ****
2026-02-18 03:28:17.259364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:28:17.259381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:28:17.259401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:28:17.259424 | orchestrator |
2026-02-18 03:28:17.259434 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-02-18 03:28:17.259444 | orchestrator | Wednesday 18 February 2026 03:28:15 +0000 (0:00:00.809) 0:00:07.597 ****
2026-02-18 03:28:17.259454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:28:17.259474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:28:35.894455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:28:35.894563 | orchestrator |
2026-02-18 03:28:35.894581 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-02-18 03:28:35.894618 | orchestrator | Wednesday 18 February 2026 03:28:17 +0000 (0:00:01.672) 0:00:09.269 ****
2026-02-18 03:28:35.894630 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-18 03:28:35.894642 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-18 03:28:35.894653 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-18 03:28:35.894664 | orchestrator |
2026-02-18 03:28:35.894675 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-02-18 03:28:35.894686 | orchestrator | Wednesday 18 February 2026 03:28:18 +0000 (0:00:01.459) 0:00:10.729 ****
2026-02-18 03:28:35.894712 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-18 03:28:35.894724 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-18 03:28:35.894735 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-18 03:28:35.894745 | orchestrator |
2026-02-18 03:28:35.894756 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-02-18 03:28:35.894767 | orchestrator | Wednesday 18 February 2026 03:28:20 +0000 (0:00:01.722) 0:00:12.452 ****
2026-02-18 03:28:35.894778 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-18 03:28:35.894788 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-18 03:28:35.894799 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-18 03:28:35.894810 | orchestrator |
2026-02-18 03:28:35.894820 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-02-18 03:28:35.894831 | orchestrator | Wednesday 18 February 2026 03:28:21 +0000 (0:00:01.404) 0:00:13.856 ****
2026-02-18 03:28:35.894842 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-18 03:28:35.894853 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-18 03:28:35.894863 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-18 03:28:35.894874 | orchestrator |
2026-02-18 03:28:35.894885 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-02-18 03:28:35.894896 | orchestrator | Wednesday 18 February 2026 03:28:23 +0000 (0:00:01.667) 0:00:15.523 ****
2026-02-18 03:28:35.894952 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-18 03:28:35.894966 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-18 03:28:35.894977 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-18 03:28:35.894990 | orchestrator |
2026-02-18 03:28:35.895003 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-18 03:28:35.895015 | orchestrator | Wednesday 18 February 2026 03:28:24 +0000 (0:00:01.348) 0:00:16.871 ****
2026-02-18 03:28:35.895029 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-18 03:28:35.895041 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-18 03:28:35.895079 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-18 03:28:35.895091 | orchestrator |
2026-02-18 03:28:35.895104 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-18 03:28:35.895116 | orchestrator | Wednesday 18 February 2026 03:28:26 +0000 (0:00:01.385) 0:00:18.257 ****
2026-02-18 03:28:35.895128 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:28:35.895142 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:28:35.895172 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:28:35.895194 | orchestrator |
2026-02-18 03:28:35.895207 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-02-18 03:28:35.895219 | orchestrator | Wednesday 18 February 2026 03:28:26 +0000 (0:00:00.396) 0:00:18.654 ****
2026-02-18 03:28:35.895234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:28:35.895256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:28:35.895270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-18 03:28:35.895282 | orchestrator |
2026-02-18 03:28:35.895293 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-02-18 03:28:35.895304 | orchestrator | Wednesday 18 February 2026 03:28:27 +0000 (0:00:01.241) 0:00:19.895 ****
2026-02-18 03:28:35.895315 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:28:35.895325 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:28:35.895336 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:28:35.895347 | orchestrator |
2026-02-18 03:28:35.895358 | orchestrator | TASK
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-18 03:28:35.895376 | orchestrator | Wednesday 18 February 2026 03:28:28 +0000 (0:00:00.819) 0:00:20.715 **** 2026-02-18 03:28:35.895387 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:28:35.895398 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:28:35.895409 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:28:35.895419 | orchestrator | 2026-02-18 03:28:35.895430 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-18 03:28:35.895447 | orchestrator | Wednesday 18 February 2026 03:28:35 +0000 (0:00:07.190) 0:00:27.905 **** 2026-02-18 03:30:08.953685 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:30:08.953791 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:30:08.953801 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:30:08.953808 | orchestrator | 2026-02-18 03:30:08.953816 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-18 03:30:08.953824 | orchestrator | 2026-02-18 03:30:08.953830 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-18 03:30:08.953837 | orchestrator | Wednesday 18 February 2026 03:28:36 +0000 (0:00:00.532) 0:00:28.437 **** 2026-02-18 03:30:08.953843 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:30:08.953850 | orchestrator | 2026-02-18 03:30:08.953857 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-18 03:30:08.953863 | orchestrator | Wednesday 18 February 2026 03:28:37 +0000 (0:00:00.613) 0:00:29.051 **** 2026-02-18 03:30:08.953869 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:30:08.953875 | orchestrator | 2026-02-18 03:30:08.953881 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-18 03:30:08.953887 | orchestrator | Wednesday 
18 February 2026 03:28:37 +0000 (0:00:00.253) 0:00:29.304 **** 2026-02-18 03:30:08.953893 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:30:08.953900 | orchestrator | 2026-02-18 03:30:08.953906 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-18 03:30:08.953912 | orchestrator | Wednesday 18 February 2026 03:28:43 +0000 (0:00:06.641) 0:00:35.946 **** 2026-02-18 03:30:08.953918 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:30:08.953925 | orchestrator | 2026-02-18 03:30:08.953931 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-18 03:30:08.953937 | orchestrator | 2026-02-18 03:30:08.954061 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-18 03:30:08.954069 | orchestrator | Wednesday 18 February 2026 03:29:33 +0000 (0:00:49.519) 0:01:25.465 **** 2026-02-18 03:30:08.954075 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:30:08.954081 | orchestrator | 2026-02-18 03:30:08.954088 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-18 03:30:08.954094 | orchestrator | Wednesday 18 February 2026 03:29:33 +0000 (0:00:00.556) 0:01:26.022 **** 2026-02-18 03:30:08.954100 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:30:08.954106 | orchestrator | 2026-02-18 03:30:08.954112 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-18 03:30:08.954119 | orchestrator | Wednesday 18 February 2026 03:29:34 +0000 (0:00:00.247) 0:01:26.269 **** 2026-02-18 03:30:08.954125 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:30:08.954131 | orchestrator | 2026-02-18 03:30:08.954137 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-18 03:30:08.954157 | orchestrator | Wednesday 18 February 2026 03:29:35 +0000 (0:00:01.513) 
0:01:27.782 **** 2026-02-18 03:30:08.954163 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:30:08.954169 | orchestrator | 2026-02-18 03:30:08.954175 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-18 03:30:08.954182 | orchestrator | 2026-02-18 03:30:08.954188 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-18 03:30:08.954194 | orchestrator | Wednesday 18 February 2026 03:29:50 +0000 (0:00:14.784) 0:01:42.567 **** 2026-02-18 03:30:08.954200 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:30:08.954206 | orchestrator | 2026-02-18 03:30:08.954228 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-18 03:30:08.954236 | orchestrator | Wednesday 18 February 2026 03:29:51 +0000 (0:00:00.720) 0:01:43.287 **** 2026-02-18 03:30:08.954244 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:30:08.954251 | orchestrator | 2026-02-18 03:30:08.954258 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-18 03:30:08.954266 | orchestrator | Wednesday 18 February 2026 03:29:51 +0000 (0:00:00.260) 0:01:43.548 **** 2026-02-18 03:30:08.954273 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:30:08.954280 | orchestrator | 2026-02-18 03:30:08.954288 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-18 03:30:08.954295 | orchestrator | Wednesday 18 February 2026 03:29:57 +0000 (0:00:06.476) 0:01:50.025 **** 2026-02-18 03:30:08.954302 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:30:08.954309 | orchestrator | 2026-02-18 03:30:08.954316 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-18 03:30:08.954323 | orchestrator | 2026-02-18 03:30:08.954330 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-02-18 03:30:08.954340 | orchestrator | Wednesday 18 February 2026 03:30:05 +0000 (0:00:07.676) 0:01:57.701 **** 2026-02-18 03:30:08.954350 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:30:08.954366 | orchestrator | 2026-02-18 03:30:08.954379 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-18 03:30:08.954388 | orchestrator | Wednesday 18 February 2026 03:30:06 +0000 (0:00:00.491) 0:01:58.192 **** 2026-02-18 03:30:08.954398 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-18 03:30:08.954408 | orchestrator | enable_outward_rabbitmq_True 2026-02-18 03:30:08.954419 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-18 03:30:08.954428 | orchestrator | outward_rabbitmq_restart 2026-02-18 03:30:08.954439 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:30:08.954448 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:30:08.954457 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:30:08.954549 | orchestrator | 2026-02-18 03:30:08.954563 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-18 03:30:08.954574 | orchestrator | skipping: no hosts matched 2026-02-18 03:30:08.954583 | orchestrator | 2026-02-18 03:30:08.954590 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-18 03:30:08.954596 | orchestrator | skipping: no hosts matched 2026-02-18 03:30:08.954603 | orchestrator | 2026-02-18 03:30:08.954609 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-18 03:30:08.954615 | orchestrator | skipping: no hosts matched 2026-02-18 03:30:08.954621 | orchestrator | 2026-02-18 03:30:08.954627 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-18 03:30:08.954651 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-18 03:30:08.954659 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:30:08.954666 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:30:08.954672 | orchestrator | 2026-02-18 03:30:08.954678 | orchestrator | 2026-02-18 03:30:08.954684 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:30:08.954690 | orchestrator | Wednesday 18 February 2026 03:30:08 +0000 (0:00:02.398) 0:02:00.591 **** 2026-02-18 03:30:08.954696 | orchestrator | =============================================================================== 2026-02-18 03:30:08.954702 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 71.98s 2026-02-18 03:30:08.954709 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 14.63s 2026-02-18 03:30:08.954724 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.19s 2026-02-18 03:30:08.954730 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.40s 2026-02-18 03:30:08.954736 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.89s 2026-02-18 03:30:08.954742 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.72s 2026-02-18 03:30:08.954748 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.67s 2026-02-18 03:30:08.954754 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.67s 2026-02-18 03:30:08.954761 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.46s 2026-02-18 03:30:08.954767 
| orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.40s 2026-02-18 03:30:08.954773 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.39s 2026-02-18 03:30:08.954779 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.35s 2026-02-18 03:30:08.954785 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.24s 2026-02-18 03:30:08.954791 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.98s 2026-02-18 03:30:08.954803 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.89s 2026-02-18 03:30:08.954810 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.86s 2026-02-18 03:30:08.954816 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.82s 2026-02-18 03:30:08.954822 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.81s 2026-02-18 03:30:08.954828 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.76s 2026-02-18 03:30:08.954835 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-02-18 03:30:11.487054 | orchestrator | 2026-02-18 03:30:11 | INFO  | Task 6cbfab05-d080-411a-98d8-81d716180cac (openvswitch) was prepared for execution. 2026-02-18 03:30:11.487140 | orchestrator | 2026-02-18 03:30:11 | INFO  | It takes a moment until task 6cbfab05-d080-411a-98d8-81d716180cac (openvswitch) has been started and output is visible here. 
2026-02-18 03:30:24.721705 | orchestrator | 2026-02-18 03:30:24.721837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 03:30:24.721849 | orchestrator | 2026-02-18 03:30:24.721856 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 03:30:24.721863 | orchestrator | Wednesday 18 February 2026 03:30:15 +0000 (0:00:00.270) 0:00:00.270 **** 2026-02-18 03:30:24.721870 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:30:24.721879 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:30:24.721886 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:30:24.721892 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:30:24.721899 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:30:24.721905 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:30:24.721912 | orchestrator | 2026-02-18 03:30:24.721919 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 03:30:24.721925 | orchestrator | Wednesday 18 February 2026 03:30:16 +0000 (0:00:00.745) 0:00:01.016 **** 2026-02-18 03:30:24.721932 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 03:30:24.721940 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 03:30:24.721947 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 03:30:24.722066 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 03:30:24.722072 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 03:30:24.722078 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 03:30:24.722113 | orchestrator | 2026-02-18 03:30:24.722119 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-02-18 03:30:24.722125 | orchestrator | 2026-02-18 03:30:24.722133 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-18 03:30:24.722139 | orchestrator | Wednesday 18 February 2026 03:30:17 +0000 (0:00:00.689) 0:00:01.706 **** 2026-02-18 03:30:24.722146 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:30:24.722154 | orchestrator | 2026-02-18 03:30:24.722161 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-18 03:30:24.722167 | orchestrator | Wednesday 18 February 2026 03:30:18 +0000 (0:00:01.211) 0:00:02.917 **** 2026-02-18 03:30:24.722173 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-18 03:30:24.722180 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-18 03:30:24.722186 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-18 03:30:24.722193 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-18 03:30:24.722199 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-18 03:30:24.722205 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-18 03:30:24.722211 | orchestrator | 2026-02-18 03:30:24.722227 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-18 03:30:24.722234 | orchestrator | Wednesday 18 February 2026 03:30:19 +0000 (0:00:01.207) 0:00:04.125 **** 2026-02-18 03:30:24.722241 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-18 03:30:24.722247 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-18 03:30:24.722253 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-18 03:30:24.722259 | orchestrator | changed: 
[testbed-node-3] => (item=openvswitch) 2026-02-18 03:30:24.722265 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-18 03:30:24.722272 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-18 03:30:24.722278 | orchestrator | 2026-02-18 03:30:24.722284 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-18 03:30:24.722290 | orchestrator | Wednesday 18 February 2026 03:30:21 +0000 (0:00:01.492) 0:00:05.617 **** 2026-02-18 03:30:24.722296 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-18 03:30:24.722303 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:30:24.722311 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-18 03:30:24.722318 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:30:24.722324 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-18 03:30:24.722331 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:30:24.722338 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-18 03:30:24.722344 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:30:24.722351 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-18 03:30:24.722357 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:30:24.722364 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-18 03:30:24.722371 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:30:24.722377 | orchestrator | 2026-02-18 03:30:24.722384 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-18 03:30:24.722391 | orchestrator | Wednesday 18 February 2026 03:30:22 +0000 (0:00:01.242) 0:00:06.860 **** 2026-02-18 03:30:24.722397 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:30:24.722404 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:30:24.722411 | orchestrator | skipping: [testbed-node-2] 
2026-02-18 03:30:24.722417 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:30:24.722423 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:30:24.722429 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:30:24.722435 | orchestrator | 2026-02-18 03:30:24.722441 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-18 03:30:24.722455 | orchestrator | Wednesday 18 February 2026 03:30:23 +0000 (0:00:00.808) 0:00:07.668 **** 2026-02-18 03:30:24.722484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:24.722497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:24.722505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:24.722570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:24.722581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:24.722593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974223 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974246 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974287 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974297 | orchestrator | 2026-02-18 03:30:26.974305 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-18 03:30:26.974314 | orchestrator | Wednesday 18 February 2026 03:30:24 +0000 (0:00:01.437) 0:00:09.105 **** 2026-02-18 03:30:26.974322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:26.974380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:29.609942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:29.610159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:29.610179 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:29.610205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:29.610235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:29.610264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:29.610275 | orchestrator | 2026-02-18 03:30:29.610285 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-18 03:30:29.610296 | orchestrator | Wednesday 18 February 2026 03:30:27 +0000 (0:00:02.254) 0:00:11.360 **** 2026-02-18 03:30:29.610306 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:30:29.610322 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:30:29.610341 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:30:29.610361 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:30:29.610375 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:30:29.610388 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:30:29.610402 | orchestrator | 2026-02-18 03:30:29.610417 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-18 03:30:29.610430 | orchestrator | Wednesday 18 February 2026 03:30:28 +0000 (0:00:01.002) 0:00:12.363 **** 2026-02-18 03:30:29.610444 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:29.610459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:29.610494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:29.610510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:29.610538 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:55.377675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 03:30:55.377758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:55.377765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 
03:30:55.377794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:55.377799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:55.377813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:55.377817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 03:30:55.377821 | orchestrator | 2026-02-18 03:30:55.377826 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-18 03:30:55.377832 | orchestrator | Wednesday 18 February 2026 03:30:29 +0000 (0:00:01.640) 0:00:14.004 **** 2026-02-18 03:30:55.377835 | orchestrator | 2026-02-18 03:30:55.377839 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-18 03:30:55.377843 | orchestrator | Wednesday 18 February 2026 03:30:30 +0000 (0:00:00.316) 0:00:14.320 **** 2026-02-18 03:30:55.377862 | orchestrator | 2026-02-18 03:30:55.377865 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-18 03:30:55.377869 | orchestrator | Wednesday 18 February 2026 03:30:30 +0000 (0:00:00.158) 0:00:14.479 **** 2026-02-18 03:30:55.377873 | orchestrator | 2026-02-18 03:30:55.377877 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
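For reference, the healthcheck settings logged for the two openvswitch containers above amount to the following docker-compose-style fragment. This is a hand-written illustration, not a file from the repo: the service names mirror the logged container names, and the bare numbers ('30', '5') are assumed to be seconds, which is how kolla-ansible passes them to the container engine.

```yaml
# Illustration only: healthcheck values taken from the task items above.
services:
  openvswitch_db:
    image: registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130
    healthcheck:
      test: ["CMD-SHELL", "ovsdb-client list-dbs"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s
  openvswitch_vswitchd:
    image: registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130
    privileged: true
    healthcheck:
      test: ["CMD-SHELL", "ovs-appctl version"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s
```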
2026-02-18 03:30:55.377880 | orchestrator | Wednesday 18 February 2026 03:30:30 +0000 (0:00:00.133) 0:00:14.613 **** 2026-02-18 03:30:55.377884 | orchestrator | 2026-02-18 03:30:55.377888 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-18 03:30:55.377892 | orchestrator | Wednesday 18 February 2026 03:30:30 +0000 (0:00:00.131) 0:00:14.745 **** 2026-02-18 03:30:55.377895 | orchestrator | 2026-02-18 03:30:55.377899 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-18 03:30:55.377903 | orchestrator | Wednesday 18 February 2026 03:30:30 +0000 (0:00:00.134) 0:00:14.880 **** 2026-02-18 03:30:55.377907 | orchestrator | 2026-02-18 03:30:55.377910 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-18 03:30:55.377914 | orchestrator | Wednesday 18 February 2026 03:30:30 +0000 (0:00:00.136) 0:00:15.017 **** 2026-02-18 03:30:55.377918 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:30:55.377923 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:30:55.377926 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:30:55.377930 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:30:55.377934 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:30:55.377938 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:30:55.377941 | orchestrator | 2026-02-18 03:30:55.377945 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-18 03:30:55.377950 | orchestrator | Wednesday 18 February 2026 03:30:39 +0000 (0:00:08.869) 0:00:23.887 **** 2026-02-18 03:30:55.377956 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:30:55.377961 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:30:55.377965 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:30:55.378007 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:30:55.378046 | orchestrator | ok: 
[testbed-node-4] 2026-02-18 03:30:55.378050 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:30:55.378054 | orchestrator | 2026-02-18 03:30:55.378058 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-18 03:30:55.378062 | orchestrator | Wednesday 18 February 2026 03:30:40 +0000 (0:00:01.162) 0:00:25.049 **** 2026-02-18 03:30:55.378066 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:30:55.378070 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:30:55.378074 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:30:55.378077 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:30:55.378081 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:30:55.378085 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:30:55.378088 | orchestrator | 2026-02-18 03:30:55.378092 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-18 03:30:55.378096 | orchestrator | Wednesday 18 February 2026 03:30:48 +0000 (0:00:07.791) 0:00:32.840 **** 2026-02-18 03:30:55.378100 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-18 03:30:55.378105 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-18 03:30:55.378109 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-18 03:30:55.378112 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-18 03:30:55.378116 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-18 03:30:55.378120 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-18 
03:30:55.378124 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-18 03:30:55.378136 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-18 03:31:08.711783 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-18 03:31:08.711886 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-18 03:31:08.711898 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-18 03:31:08.711908 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-18 03:31:08.711917 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 03:31:08.711925 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 03:31:08.711934 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 03:31:08.711942 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 03:31:08.711951 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 03:31:08.711959 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 03:31:08.712018 | orchestrator | 2026-02-18 03:31:08.712029 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
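The "Set system-id, hostname and hw-offload" results above loop over (col, name, value) items against the Open_vSwitch table. A minimal Python sketch of how those items map onto `ovs-vsctl` invocations — my own reconstruction from the logged item dicts, not kolla-ansible code; `build_ovs_vsctl_cmd` is a hypothetical helper:

```python
# Sketch: translate the logged task items into equivalent ovs-vsctl commands.
# Item shapes mirror the log above; this helper is illustrative only.
def build_ovs_vsctl_cmd(item, table="Open_vSwitch", record="."):
    col, name = item["col"], item["name"]
    if item.get("state") == "absent":
        # e.g. the hw-offload item: drop the key from other_config entirely
        return f"ovs-vsctl remove {table} {record} {col} {name}"
    return f"ovs-vsctl set {table} {record} {col}:{name}={item['value']}"

items = [
    {"col": "external_ids", "name": "system-id", "value": "testbed-node-0"},
    {"col": "external_ids", "name": "hostname", "value": "testbed-node-0"},
    {"col": "other_config", "name": "hw-offload", "value": True, "state": "absent"},
]
for it in items:
    print(build_ovs_vsctl_cmd(it))
```

Note the third item yields a `remove`, matching the `ok:` (no-op once absent) results for hw-offload in the log, while the first two yield `set` commands and report `changed`.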
2026-02-18 03:31:08.712039 | orchestrator | Wednesday 18 February 2026 03:30:55 +0000 (0:00:06.832) 0:00:39.672 **** 2026-02-18 03:31:08.712050 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-18 03:31:08.712059 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:31:08.712069 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-18 03:31:08.712077 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:31:08.712086 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-18 03:31:08.712094 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:31:08.712103 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-18 03:31:08.712112 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-18 03:31:08.712120 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-18 03:31:08.712129 | orchestrator | 2026-02-18 03:31:08.712137 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-18 03:31:08.712146 | orchestrator | Wednesday 18 February 2026 03:30:57 +0000 (0:00:02.500) 0:00:42.173 **** 2026-02-18 03:31:08.712155 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-18 03:31:08.712163 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:31:08.712172 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-18 03:31:08.712180 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:31:08.712189 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-18 03:31:08.712197 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:31:08.712206 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-18 03:31:08.712214 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-18 03:31:08.712238 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-18 03:31:08.712247 | orchestrator 
| 2026-02-18 03:31:08.712256 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-18 03:31:08.712264 | orchestrator | Wednesday 18 February 2026 03:31:01 +0000 (0:00:03.217) 0:00:45.390 ****
2026-02-18 03:31:08.712273 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:31:08.712281 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:31:08.712309 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:31:08.712318 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:31:08.712327 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:31:08.712337 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:31:08.712347 | orchestrator |
2026-02-18 03:31:08.712357 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:31:08.712368 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-18 03:31:08.712380 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-18 03:31:08.712390 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-18 03:31:08.712401 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-18 03:31:08.712411 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-18 03:31:08.712420 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-18 03:31:08.712430 | orchestrator |
2026-02-18 03:31:08.712441 | orchestrator |
2026-02-18 03:31:08.712451 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:31:08.712462 | orchestrator | Wednesday 18 February 2026 03:31:08 +0000 (0:00:07.207) 0:00:52.598 ****
2026-02-18 03:31:08.712486 | orchestrator | ===============================================================================
2026-02-18 03:31:08.712496 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.00s
2026-02-18 03:31:08.712507 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.87s
2026-02-18 03:31:08.712517 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.83s
2026-02-18 03:31:08.712527 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.22s
2026-02-18 03:31:08.712537 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.50s
2026-02-18 03:31:08.712547 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.25s
2026-02-18 03:31:08.712557 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.64s
2026-02-18 03:31:08.712567 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.49s
2026-02-18 03:31:08.712577 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.44s
2026-02-18 03:31:08.712587 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.24s
2026-02-18 03:31:08.712597 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.21s
2026-02-18 03:31:08.712607 | orchestrator | module-load : Load modules ---------------------------------------------- 1.21s
2026-02-18 03:31:08.712617 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.16s
2026-02-18 03:31:08.712627 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.01s
2026-02-18 03:31:08.712637 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.00s
2026-02-18 03:31:08.712648 | orchestrator |
openvswitch : Create /run/openvswitch directory on host ----------------- 0.81s 2026-02-18 03:31:08.712658 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.75s 2026-02-18 03:31:08.712668 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s 2026-02-18 03:31:11.259296 | orchestrator | 2026-02-18 03:31:11 | INFO  | Task b6d371b2-0de3-4e97-af2b-6a3f6a464f18 (ovn) was prepared for execution. 2026-02-18 03:31:11.259412 | orchestrator | 2026-02-18 03:31:11 | INFO  | It takes a moment until task b6d371b2-0de3-4e97-af2b-6a3f6a464f18 (ovn) has been started and output is visible here. 2026-02-18 03:31:22.305831 | orchestrator | 2026-02-18 03:31:22.305929 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 03:31:22.305944 | orchestrator | 2026-02-18 03:31:22.305954 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 03:31:22.305964 | orchestrator | Wednesday 18 February 2026 03:31:15 +0000 (0:00:00.183) 0:00:00.183 **** 2026-02-18 03:31:22.305974 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:31:22.306097 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:31:22.306112 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:31:22.306121 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:31:22.306131 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:31:22.306140 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:31:22.306150 | orchestrator | 2026-02-18 03:31:22.306160 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 03:31:22.306170 | orchestrator | Wednesday 18 February 2026 03:31:16 +0000 (0:00:00.827) 0:00:01.011 **** 2026-02-18 03:31:22.306196 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-18 03:31:22.306207 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-18 
03:31:22.306216 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-18 03:31:22.306226 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-18 03:31:22.306235 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-18 03:31:22.306245 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-18 03:31:22.306254 | orchestrator | 2026-02-18 03:31:22.306264 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-18 03:31:22.306274 | orchestrator | 2026-02-18 03:31:22.306284 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-18 03:31:22.306293 | orchestrator | Wednesday 18 February 2026 03:31:17 +0000 (0:00:00.837) 0:00:01.849 **** 2026-02-18 03:31:22.306304 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:31:22.306314 | orchestrator | 2026-02-18 03:31:22.306324 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-18 03:31:22.306334 | orchestrator | Wednesday 18 February 2026 03:31:18 +0000 (0:00:01.145) 0:00:02.995 **** 2026-02-18 03:31:22.306345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306357 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306455 | orchestrator | 2026-02-18 03:31:22.306466 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-18 03:31:22.306477 | orchestrator | Wednesday 18 February 2026 03:31:19 +0000 (0:00:01.187) 0:00:04.182 **** 2026-02-18 03:31:22.306495 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306530 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306572 | orchestrator | 2026-02-18 03:31:22.306584 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-18 03:31:22.306595 | orchestrator | Wednesday 18 February 2026 03:31:21 +0000 (0:00:01.556) 0:00:05.738 **** 2026-02-18 03:31:22.306607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306619 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:22.306638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399435 | orchestrator | 2026-02-18 03:31:47.399448 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-18 03:31:47.399460 | orchestrator | Wednesday 18 February 2026 03:31:22 +0000 (0:00:01.180) 0:00:06.918 **** 2026-02-18 03:31:47.399472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399580 | orchestrator | 2026-02-18 03:31:47.399591 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-18 03:31:47.399602 | orchestrator | Wednesday 18 February 2026 03:31:23 +0000 (0:00:01.536) 0:00:08.454 **** 
2026-02-18 03:31:47.399620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399643 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399673 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:31:47.399695 | orchestrator | 2026-02-18 03:31:47.399706 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-18 03:31:47.399718 | orchestrator | Wednesday 18 February 2026 03:31:25 +0000 (0:00:01.412) 0:00:09.867 **** 2026-02-18 03:31:47.399730 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:31:47.399742 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:31:47.399755 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:31:47.399767 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:31:47.399780 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:31:47.399791 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:31:47.399804 | orchestrator | 2026-02-18 03:31:47.399817 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-18 03:31:47.399829 | orchestrator | Wednesday 18 February 2026 03:31:27 +0000 (0:00:02.570) 0:00:12.437 **** 2026-02-18 03:31:47.399842 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-02-18 03:31:47.399855 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-18 03:31:47.399867 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-18 03:31:47.399880 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-18 03:31:47.399892 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-18 03:31:47.399905 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-18 03:31:47.399925 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 03:32:26.351499 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 03:32:26.351623 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 03:32:26.351665 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 03:32:26.351681 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 03:32:26.351694 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 03:32:26.351706 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-18 03:32:26.351721 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-18 03:32:26.351759 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-18 03:32:26.351772 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-18 03:32:26.351786 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-18 03:32:26.351798 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-18 03:32:26.351812 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 03:32:26.351826 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 03:32:26.351839 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 03:32:26.351850 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 03:32:26.351859 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 03:32:26.351867 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 03:32:26.351875 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-18 03:32:26.351883 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-18 03:32:26.351890 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-18 03:32:26.351898 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-18 03:32:26.351905 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-18 03:32:26.351913 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-18 03:32:26.351921 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 03:32:26.351929 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 03:32:26.351936 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 03:32:26.351944 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 03:32:26.351951 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 03:32:26.351959 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 03:32:26.351967 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-18 03:32:26.351975 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-18 03:32:26.351983 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-18 03:32:26.351990 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-18 03:32:26.351998 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-18 03:32:26.352006 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-18 03:32:26.352013 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 
'present'}) 2026-02-18 03:32:26.352080 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-18 03:32:26.352090 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-18 03:32:26.352104 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-18 03:32:26.352113 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-18 03:32:26.352120 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-18 03:32:26.352128 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-18 03:32:26.352135 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-18 03:32:26.352143 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-18 03:32:26.352151 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-18 03:32:26.352159 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-18 03:32:26.352166 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-18 03:32:26.352174 | orchestrator | 2026-02-18 03:32:26.352182 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-18 03:32:26.352190 | orchestrator | Wednesday 18 February 2026 03:31:46 +0000 (0:00:18.967) 0:00:31.404 **** 2026-02-18 03:32:26.352198 | orchestrator | 2026-02-18 03:32:26.352206 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-18 03:32:26.352213 | orchestrator | Wednesday 18 February 2026 03:31:47 +0000 (0:00:00.253) 0:00:31.657 **** 2026-02-18 03:32:26.352221 | orchestrator | 2026-02-18 03:32:26.352229 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-18 03:32:26.352236 | orchestrator | Wednesday 18 February 2026 03:31:47 +0000 (0:00:00.063) 0:00:31.721 **** 2026-02-18 03:32:26.352244 | orchestrator | 2026-02-18 03:32:26.352252 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-18 03:32:26.352259 | orchestrator | Wednesday 18 February 2026 03:31:47 +0000 (0:00:00.066) 0:00:31.788 **** 2026-02-18 03:32:26.352267 | orchestrator | 2026-02-18 03:32:26.352274 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-18 03:32:26.352282 | orchestrator | Wednesday 18 February 2026 03:31:47 +0000 (0:00:00.089) 0:00:31.877 **** 2026-02-18 03:32:26.352290 | orchestrator | 2026-02-18 03:32:26.352298 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-18 03:32:26.352305 | orchestrator | Wednesday 18 February 2026 03:31:47 +0000 (0:00:00.066) 0:00:31.944 **** 2026-02-18 03:32:26.352313 | orchestrator | 2026-02-18 03:32:26.352320 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-18 03:32:26.352329 | orchestrator | Wednesday 18 February 2026 03:31:47 +0000 (0:00:00.064) 0:00:32.009 **** 2026-02-18 03:32:26.352336 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:32:26.352345 | orchestrator | ok: 
[testbed-node-5] 2026-02-18 03:32:26.352353 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:32:26.352361 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:32:26.352368 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:32:26.352376 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:32:26.352384 | orchestrator | 2026-02-18 03:32:26.352391 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-18 03:32:26.352399 | orchestrator | Wednesday 18 February 2026 03:31:49 +0000 (0:00:01.623) 0:00:33.632 **** 2026-02-18 03:32:26.352415 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:32:26.352424 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:32:26.352431 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:32:26.352439 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:32:26.352447 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:32:26.352454 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:32:26.352462 | orchestrator | 2026-02-18 03:32:26.352470 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-18 03:32:26.352477 | orchestrator | 2026-02-18 03:32:26.352485 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-18 03:32:26.352493 | orchestrator | Wednesday 18 February 2026 03:32:23 +0000 (0:00:34.963) 0:01:08.595 **** 2026-02-18 03:32:26.352501 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:32:26.352508 | orchestrator | 2026-02-18 03:32:26.352516 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-18 03:32:26.352524 | orchestrator | Wednesday 18 February 2026 03:32:24 +0000 (0:00:00.776) 0:01:09.372 **** 2026-02-18 03:32:26.352531 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-18 03:32:26.352539 | orchestrator | 2026-02-18 03:32:26.352547 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-18 03:32:26.352555 | orchestrator | Wednesday 18 February 2026 03:32:25 +0000 (0:00:00.649) 0:01:10.022 **** 2026-02-18 03:32:26.352562 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:32:26.352570 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:32:26.352578 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:32:26.352585 | orchestrator | 2026-02-18 03:32:26.352593 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-18 03:32:26.352606 | orchestrator | Wednesday 18 February 2026 03:32:26 +0000 (0:00:00.940) 0:01:10.962 **** 2026-02-18 03:32:37.372122 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:32:37.372265 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:32:37.372293 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:32:37.372315 | orchestrator | 2026-02-18 03:32:37.372330 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-18 03:32:37.372361 | orchestrator | Wednesday 18 February 2026 03:32:26 +0000 (0:00:00.353) 0:01:11.316 **** 2026-02-18 03:32:37.372372 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:32:37.372383 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:32:37.372394 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:32:37.372405 | orchestrator | 2026-02-18 03:32:37.372416 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-18 03:32:37.372427 | orchestrator | Wednesday 18 February 2026 03:32:27 +0000 (0:00:00.342) 0:01:11.659 **** 2026-02-18 03:32:37.372437 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:32:37.372448 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:32:37.372459 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:32:37.372469 | orchestrator | 
2026-02-18 03:32:37.372480 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-18 03:32:37.372491 | orchestrator | Wednesday 18 February 2026 03:32:27 +0000 (0:00:00.332) 0:01:11.991 ****
2026-02-18 03:32:37.372502 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:32:37.372513 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:32:37.372529 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:32:37.372547 | orchestrator |
2026-02-18 03:32:37.372565 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-18 03:32:37.372583 | orchestrator | Wednesday 18 February 2026 03:32:27 +0000 (0:00:00.565) 0:01:12.557 ****
2026-02-18 03:32:37.372603 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.372622 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.372643 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.372662 | orchestrator |
2026-02-18 03:32:37.372682 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-18 03:32:37.372719 | orchestrator | Wednesday 18 February 2026 03:32:28 +0000 (0:00:00.326) 0:01:12.884 ****
2026-02-18 03:32:37.372733 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.372745 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.372758 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.372770 | orchestrator |
2026-02-18 03:32:37.372782 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-18 03:32:37.372794 | orchestrator | Wednesday 18 February 2026 03:32:28 +0000 (0:00:00.310) 0:01:13.194 ****
2026-02-18 03:32:37.372805 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.372816 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.372826 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.372837 | orchestrator |
2026-02-18 03:32:37.372848 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-18 03:32:37.372858 | orchestrator | Wednesday 18 February 2026 03:32:28 +0000 (0:00:00.287) 0:01:13.481 ****
2026-02-18 03:32:37.372869 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.372880 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.372890 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.372905 | orchestrator |
2026-02-18 03:32:37.372924 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-18 03:32:37.372941 | orchestrator | Wednesday 18 February 2026 03:32:29 +0000 (0:00:00.314) 0:01:13.796 ****
2026-02-18 03:32:37.372959 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.372978 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.372997 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.373017 | orchestrator |
2026-02-18 03:32:37.373064 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-18 03:32:37.373084 | orchestrator | Wednesday 18 February 2026 03:32:29 +0000 (0:00:00.427) 0:01:14.224 ****
2026-02-18 03:32:37.373103 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.373122 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.373140 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.373152 | orchestrator |
2026-02-18 03:32:37.373163 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-18 03:32:37.373174 | orchestrator | Wednesday 18 February 2026 03:32:29 +0000 (0:00:00.282) 0:01:14.506 ****
2026-02-18 03:32:37.373185 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.373196 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.373206 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.373217 | orchestrator |
2026-02-18 03:32:37.373227 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-18 03:32:37.373238 | orchestrator | Wednesday 18 February 2026 03:32:30 +0000 (0:00:00.287) 0:01:14.794 ****
2026-02-18 03:32:37.373249 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.373259 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.373271 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.373290 | orchestrator |
2026-02-18 03:32:37.373308 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-18 03:32:37.373326 | orchestrator | Wednesday 18 February 2026 03:32:30 +0000 (0:00:00.287) 0:01:15.081 ****
2026-02-18 03:32:37.373344 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.373397 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.373418 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.373429 | orchestrator |
2026-02-18 03:32:37.373440 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-18 03:32:37.373451 | orchestrator | Wednesday 18 February 2026 03:32:30 +0000 (0:00:00.417) 0:01:15.499 ****
2026-02-18 03:32:37.373461 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.373472 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.373482 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.373493 | orchestrator |
2026-02-18 03:32:37.373504 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-18 03:32:37.373526 | orchestrator | Wednesday 18 February 2026 03:32:31 +0000 (0:00:00.268) 0:01:15.768 ****
2026-02-18 03:32:37.373537 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.373547 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.373558 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.373569 | orchestrator |
2026-02-18 03:32:37.373579 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-18 03:32:37.373590 | orchestrator | Wednesday 18 February 2026 03:32:31 +0000 (0:00:00.297) 0:01:16.065 ****
2026-02-18 03:32:37.373621 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.373633 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.373647 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.373666 | orchestrator |
2026-02-18 03:32:37.373683 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-18 03:32:37.373710 | orchestrator | Wednesday 18 February 2026 03:32:31 +0000 (0:00:00.269) 0:01:16.335 ****
2026-02-18 03:32:37.373729 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:32:37.373748 | orchestrator |
2026-02-18 03:32:37.373766 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-18 03:32:37.373785 | orchestrator | Wednesday 18 February 2026 03:32:32 +0000 (0:00:00.718) 0:01:17.054 ****
2026-02-18 03:32:37.373801 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:32:37.373811 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:32:37.373822 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:32:37.373833 | orchestrator |
2026-02-18 03:32:37.373844 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-18 03:32:37.373854 | orchestrator | Wednesday 18 February 2026 03:32:32 +0000 (0:00:00.455) 0:01:17.509 ****
2026-02-18 03:32:37.373865 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:32:37.373876 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:32:37.373887 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:32:37.373897 | orchestrator |
2026-02-18 03:32:37.373908 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-18 03:32:37.373919 | orchestrator | Wednesday 18 February 2026 03:32:33 +0000 (0:00:00.489) 0:01:17.998 ****
2026-02-18 03:32:37.373930 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.373940 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.373951 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.373962 | orchestrator |
2026-02-18 03:32:37.373972 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-18 03:32:37.373983 | orchestrator | Wednesday 18 February 2026 03:32:33 +0000 (0:00:00.331) 0:01:18.330 ****
2026-02-18 03:32:37.373993 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.374004 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.374015 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.374162 | orchestrator |
2026-02-18 03:32:37.374184 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-18 03:32:37.374204 | orchestrator | Wednesday 18 February 2026 03:32:34 +0000 (0:00:00.576) 0:01:18.907 ****
2026-02-18 03:32:37.374216 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.374227 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.374238 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.374248 | orchestrator |
2026-02-18 03:32:37.374259 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-18 03:32:37.374269 | orchestrator | Wednesday 18 February 2026 03:32:34 +0000 (0:00:00.334) 0:01:19.241 ****
2026-02-18 03:32:37.374280 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.374290 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.374301 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.374311 | orchestrator |
2026-02-18 03:32:37.374322 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-18 03:32:37.374333 | orchestrator | Wednesday 18 February 2026 03:32:35 +0000 (0:00:00.418) 0:01:19.660 ****
2026-02-18 03:32:37.374358 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.374369 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.374379 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.374390 | orchestrator |
2026-02-18 03:32:37.374401 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-18 03:32:37.374411 | orchestrator | Wednesday 18 February 2026 03:32:35 +0000 (0:00:00.343) 0:01:20.003 ****
2026-02-18 03:32:37.374422 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:32:37.374439 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:32:37.374457 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:32:37.374475 | orchestrator |
2026-02-18 03:32:37.374494 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-18 03:32:37.374512 | orchestrator | Wednesday 18 February 2026 03:32:35 +0000 (0:00:00.558) 0:01:20.562 ****
2026-02-18 03:32:37.374534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:37.374556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:37.374576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:37.374607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656660 | orchestrator |
2026-02-18 03:32:43.656673 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-18 03:32:43.656686 | orchestrator | Wednesday 18 February 2026 03:32:37 +0000 (0:00:01.421) 0:01:21.983 ****
2026-02-18 03:32:43.656699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656846 | orchestrator |
2026-02-18 03:32:43.656857 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-18 03:32:43.656868 | orchestrator | Wednesday 18 February 2026 03:32:41 +0000 (0:00:03.876) 0:01:25.860 ****
2026-02-18 03:32:43.656880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:32:43.656950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.450857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.450972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.450984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.450992 | orchestrator |
2026-02-18 03:33:07.451001 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-18 03:33:07.451009 | orchestrator | Wednesday 18 February 2026 03:32:43 +0000 (0:00:01.985) 0:01:27.845 ****
2026-02-18 03:33:07.451017 | orchestrator |
2026-02-18 03:33:07.451024 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-18 03:33:07.451031 | orchestrator | Wednesday 18 February 2026 03:32:43 +0000 (0:00:00.077) 0:01:27.923 ****
2026-02-18 03:33:07.451038 | orchestrator |
2026-02-18 03:33:07.451045 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-18 03:33:07.451135 | orchestrator | Wednesday 18 February 2026 03:32:43 +0000 (0:00:00.268) 0:01:28.192 ****
2026-02-18 03:33:07.451143 | orchestrator |
2026-02-18 03:33:07.451150 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-18 03:33:07.451157 | orchestrator | Wednesday 18 February 2026 03:32:43 +0000 (0:00:00.067) 0:01:28.259 ****
2026-02-18 03:33:07.451164 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:33:07.451173 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:33:07.451180 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:33:07.451187 | orchestrator |
2026-02-18 03:33:07.451194 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-18 03:33:07.451201 | orchestrator | Wednesday 18 February 2026 03:32:46 +0000 (0:00:02.512) 0:01:30.772 ****
2026-02-18 03:33:07.451208 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:33:07.451215 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:33:07.451222 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:33:07.451229 | orchestrator |
2026-02-18 03:33:07.451236 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-18 03:33:07.451244 | orchestrator | Wednesday 18 February 2026 03:32:53 +0000 (0:00:07.602) 0:01:38.375 ****
2026-02-18 03:33:07.451251 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:33:07.451258 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:33:07.451265 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:33:07.451272 | orchestrator |
2026-02-18 03:33:07.451279 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-18 03:33:07.451286 | orchestrator | Wednesday 18 February 2026 03:33:00 +0000 (0:00:06.632) 0:01:45.008 ****
2026-02-18 03:33:07.451293 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:33:07.451301 | orchestrator |
2026-02-18 03:33:07.451308 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-18 03:33:07.451315 | orchestrator | Wednesday 18 February 2026 03:33:00 +0000 (0:00:00.125) 0:01:45.134 ****
2026-02-18 03:33:07.451322 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:33:07.451330 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:33:07.451337 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:33:07.451345 | orchestrator |
2026-02-18 03:33:07.451352 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-18 03:33:07.451359 | orchestrator | Wednesday 18 February 2026 03:33:01 +0000 (0:00:01.017) 0:01:46.151 ****
2026-02-18 03:33:07.451366 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:33:07.451380 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:33:07.451388 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:33:07.451395 | orchestrator |
2026-02-18 03:33:07.451403 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-18 03:33:07.451411 | orchestrator | Wednesday 18 February 2026 03:33:02 +0000 (0:00:00.658) 0:01:46.809 ****
2026-02-18 03:33:07.451420 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:33:07.451428 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:33:07.451436 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:33:07.451445 | orchestrator |
2026-02-18 03:33:07.451453 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-18 03:33:07.451474 | orchestrator | Wednesday 18 February 2026 03:33:02 +0000 (0:00:00.767) 0:01:47.577 ****
2026-02-18 03:33:07.451483 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:33:07.451491 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:33:07.451500 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:33:07.451508 | orchestrator |
2026-02-18 03:33:07.451516 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-18 03:33:07.451524 | orchestrator | Wednesday 18 February 2026 03:33:03 +0000 (0:00:00.677) 0:01:48.255 ****
2026-02-18 03:33:07.451533 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:33:07.451541 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:33:07.451563 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:33:07.451572 | orchestrator |
2026-02-18 03:33:07.451581 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-18 03:33:07.451589 | orchestrator | Wednesday 18 February 2026 03:33:04 +0000 (0:00:01.268) 0:01:49.523 ****
2026-02-18 03:33:07.451598 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:33:07.451607 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:33:07.451615 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:33:07.451624 | orchestrator |
2026-02-18 03:33:07.451632 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-02-18 03:33:07.451641 | orchestrator | Wednesday 18 February 2026 03:33:05 +0000 (0:00:00.347) 0:01:50.271 ****
2026-02-18 03:33:07.451648 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:33:07.451655 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:33:07.451662 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:33:07.451669 | orchestrator |
2026-02-18 03:33:07.451676 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-18 03:33:07.451685 | orchestrator | Wednesday 18 February 2026 03:33:05 +0000 (0:00:00.347) 0:01:50.618 ****
2026-02-18 03:33:07.451701 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.451716 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.451730 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.451743 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.451764 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.451778 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.451791 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.451809 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:07.451830 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683180 | orchestrator |
2026-02-18 03:33:14.683274 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-18 03:33:14.683294 | orchestrator | Wednesday 18 February 2026 03:33:07 +0000 (0:00:01.443) 0:01:52.062 ****
2026-02-18 03:33:14.683307 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683316 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683321 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683327 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683363 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683388 | orchestrator |
2026-02-18 03:33:14.683393 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-18 03:33:14.683398 | orchestrator | Wednesday 18 February 2026 03:33:11 +0000 (0:00:03.916) 0:01:55.979 ****
2026-02-18 03:33:14.683416 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683421 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683426 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683431 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683451 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 03:33:14.683456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:33:14.683464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 03:33:14.683469 | orchestrator | 2026-02-18 03:33:14.683474 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-18 03:33:14.683479 | orchestrator | Wednesday 18 February 2026 03:33:14 +0000 (0:00:03.104) 0:01:59.084 **** 2026-02-18 03:33:14.683484 | orchestrator | 2026-02-18 03:33:14.683488 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-18 03:33:14.683493 | orchestrator | Wednesday 18 February 2026 03:33:14 +0000 (0:00:00.063) 0:01:59.147 **** 2026-02-18 03:33:14.683498 | orchestrator | 2026-02-18 03:33:14.683503 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-18 03:33:14.683507 | orchestrator | Wednesday 18 February 2026 03:33:14 +0000 (0:00:00.068) 0:01:59.215 **** 2026-02-18 03:33:14.683512 | orchestrator | 2026-02-18 03:33:14.683528 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-18 03:33:38.839439 | orchestrator | Wednesday 18 February 2026 03:33:14 +0000 (0:00:00.069) 0:01:59.285 **** 2026-02-18 03:33:38.839584 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:33:38.839616 | orchestrator | changed: 
[testbed-node-1] 2026-02-18 03:33:38.839636 | orchestrator | 2026-02-18 03:33:38.839656 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-18 03:33:38.839675 | orchestrator | Wednesday 18 February 2026 03:33:20 +0000 (0:00:06.168) 0:02:05.454 **** 2026-02-18 03:33:38.839695 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:33:38.839713 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:33:38.839733 | orchestrator | 2026-02-18 03:33:38.839752 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-18 03:33:38.839805 | orchestrator | Wednesday 18 February 2026 03:33:26 +0000 (0:00:06.174) 0:02:11.628 **** 2026-02-18 03:33:38.839824 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:33:38.839843 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:33:38.839861 | orchestrator | 2026-02-18 03:33:38.839881 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-18 03:33:38.839900 | orchestrator | Wednesday 18 February 2026 03:33:33 +0000 (0:00:06.216) 0:02:17.845 **** 2026-02-18 03:33:38.839919 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:33:38.839939 | orchestrator | 2026-02-18 03:33:38.839987 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-18 03:33:38.840007 | orchestrator | Wednesday 18 February 2026 03:33:33 +0000 (0:00:00.127) 0:02:17.972 **** 2026-02-18 03:33:38.840019 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:33:38.840031 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:33:38.840041 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:33:38.840052 | orchestrator | 2026-02-18 03:33:38.840063 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-18 03:33:38.840074 | orchestrator | Wednesday 18 February 2026 03:33:34 +0000 (0:00:01.033) 0:02:19.005 **** 
2026-02-18 03:33:38.840085 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:33:38.840095 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:33:38.840106 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:33:38.840116 | orchestrator | 2026-02-18 03:33:38.840127 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-18 03:33:38.840138 | orchestrator | Wednesday 18 February 2026 03:33:35 +0000 (0:00:00.679) 0:02:19.684 **** 2026-02-18 03:33:38.840149 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:33:38.840160 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:33:38.840170 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:33:38.840181 | orchestrator | 2026-02-18 03:33:38.840192 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-18 03:33:38.840202 | orchestrator | Wednesday 18 February 2026 03:33:35 +0000 (0:00:00.796) 0:02:20.481 **** 2026-02-18 03:33:38.840213 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:33:38.840223 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:33:38.840234 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:33:38.840244 | orchestrator | 2026-02-18 03:33:38.840255 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-18 03:33:38.840266 | orchestrator | Wednesday 18 February 2026 03:33:36 +0000 (0:00:00.653) 0:02:21.135 **** 2026-02-18 03:33:38.840276 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:33:38.840287 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:33:38.840297 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:33:38.840308 | orchestrator | 2026-02-18 03:33:38.840319 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-18 03:33:38.840329 | orchestrator | Wednesday 18 February 2026 03:33:37 +0000 (0:00:01.048) 0:02:22.183 **** 2026-02-18 03:33:38.840340 | orchestrator 
| ok: [testbed-node-0] 2026-02-18 03:33:38.840351 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:33:38.840361 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:33:38.840372 | orchestrator | 2026-02-18 03:33:38.840382 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:33:38.840394 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-18 03:33:38.840407 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-18 03:33:38.840417 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-18 03:33:38.840428 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:33:38.840450 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:33:38.840461 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:33:38.840472 | orchestrator | 2026-02-18 03:33:38.840482 | orchestrator | 2026-02-18 03:33:38.840507 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:33:38.840519 | orchestrator | Wednesday 18 February 2026 03:33:38 +0000 (0:00:00.861) 0:02:23.045 **** 2026-02-18 03:33:38.840529 | orchestrator | =============================================================================== 2026-02-18 03:33:38.840540 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.96s 2026-02-18 03:33:38.840550 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.97s 2026-02-18 03:33:38.840561 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.78s 2026-02-18 03:33:38.840575 | orchestrator | ovn-db 
: Restart ovn-northd container ---------------------------------- 12.85s 2026-02-18 03:33:38.840593 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.68s 2026-02-18 03:33:38.840635 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.92s 2026-02-18 03:33:38.840655 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.88s 2026-02-18 03:33:38.840673 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.10s 2026-02-18 03:33:38.840686 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.57s 2026-02-18 03:33:38.840697 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.99s 2026-02-18 03:33:38.840708 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.62s 2026-02-18 03:33:38.840718 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.56s 2026-02-18 03:33:38.840729 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.54s 2026-02-18 03:33:38.840739 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.44s 2026-02-18 03:33:38.840749 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.42s 2026-02-18 03:33:38.840760 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.41s 2026-02-18 03:33:38.840770 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.27s 2026-02-18 03:33:38.840781 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.19s 2026-02-18 03:33:38.840792 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.18s 2026-02-18 03:33:38.840802 | orchestrator | ovn-controller : 
include_tasks ------------------------------------------ 1.15s 2026-02-18 03:33:39.201535 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-18 03:33:39.201630 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-18 03:33:41.519384 | orchestrator | 2026-02-18 03:33:41 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-18 03:33:51.724143 | orchestrator | 2026-02-18 03:33:51 | INFO  | Task 005b47eb-ed58-4f92-9c6d-4b5686ae1f59 (wipe-partitions) was prepared for execution. 2026-02-18 03:33:51.724219 | orchestrator | 2026-02-18 03:33:51 | INFO  | It takes a moment until task 005b47eb-ed58-4f92-9c6d-4b5686ae1f59 (wipe-partitions) has been started and output is visible here. 2026-02-18 03:34:04.907911 | orchestrator | 2026-02-18 03:34:04.908046 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-18 03:34:04.908074 | orchestrator | 2026-02-18 03:34:04.908094 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-18 03:34:04.908114 | orchestrator | Wednesday 18 February 2026 03:33:56 +0000 (0:00:00.135) 0:00:00.135 **** 2026-02-18 03:34:04.908172 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:34:04.908194 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:34:04.908213 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:34:04.908233 | orchestrator | 2026-02-18 03:34:04.908254 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-18 03:34:04.908273 | orchestrator | Wednesday 18 February 2026 03:33:56 +0000 (0:00:00.649) 0:00:00.785 **** 2026-02-18 03:34:04.908292 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:04.908311 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:34:04.908330 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:34:04.908350 | orchestrator | 2026-02-18 03:34:04.908371 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-18 03:34:04.908392 | orchestrator | Wednesday 18 February 2026 03:33:57 +0000 (0:00:00.382) 0:00:01.167 **** 2026-02-18 03:34:04.908412 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:34:04.908433 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:34:04.908452 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:34:04.908473 | orchestrator | 2026-02-18 03:34:04.908494 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-18 03:34:04.908514 | orchestrator | Wednesday 18 February 2026 03:33:57 +0000 (0:00:00.603) 0:00:01.771 **** 2026-02-18 03:34:04.908532 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:04.908569 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:34:04.908592 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:34:04.908613 | orchestrator | 2026-02-18 03:34:04.908633 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-18 03:34:04.908652 | orchestrator | Wednesday 18 February 2026 03:33:58 +0000 (0:00:00.288) 0:00:02.059 **** 2026-02-18 03:34:04.908672 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-18 03:34:04.908692 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-18 03:34:04.908713 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-18 03:34:04.908732 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-18 03:34:04.908752 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-18 03:34:04.908800 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-18 03:34:04.908838 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-18 03:34:04.908858 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-18 03:34:04.908876 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 
2026-02-18 03:34:04.908895 | orchestrator | 2026-02-18 03:34:04.908933 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-18 03:34:04.908966 | orchestrator | Wednesday 18 February 2026 03:33:59 +0000 (0:00:01.283) 0:00:03.343 **** 2026-02-18 03:34:04.908985 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-18 03:34:04.909003 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-18 03:34:04.909021 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-18 03:34:04.909038 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-18 03:34:04.909057 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-18 03:34:04.909076 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-18 03:34:04.909094 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-18 03:34:04.909112 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-18 03:34:04.909131 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-18 03:34:04.909148 | orchestrator | 2026-02-18 03:34:04.909167 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-18 03:34:04.909185 | orchestrator | Wednesday 18 February 2026 03:34:01 +0000 (0:00:01.624) 0:00:04.968 **** 2026-02-18 03:34:04.909204 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-18 03:34:04.909222 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-18 03:34:04.909240 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-18 03:34:04.909258 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-18 03:34:04.909289 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-18 03:34:04.909309 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-18 03:34:04.909327 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-18 03:34:04.909345 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-18 03:34:04.909364 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-18 03:34:04.909381 | orchestrator | 2026-02-18 03:34:04.909399 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-18 03:34:04.909417 | orchestrator | Wednesday 18 February 2026 03:34:03 +0000 (0:00:02.192) 0:00:07.161 **** 2026-02-18 03:34:04.909436 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:34:04.909455 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:34:04.909473 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:34:04.909491 | orchestrator | 2026-02-18 03:34:04.909508 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-18 03:34:04.909526 | orchestrator | Wednesday 18 February 2026 03:34:03 +0000 (0:00:00.628) 0:00:07.789 **** 2026-02-18 03:34:04.909546 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:34:04.909564 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:34:04.909582 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:34:04.909600 | orchestrator | 2026-02-18 03:34:04.909617 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:34:04.909636 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:34:04.909656 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:34:04.909699 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:34:04.909718 | orchestrator | 2026-02-18 03:34:04.909737 | orchestrator | 2026-02-18 03:34:04.909755 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:34:04.909806 | orchestrator | Wednesday 18 February 2026 03:34:04 +0000 
(0:00:00.642) 0:00:08.431 **** 2026-02-18 03:34:04.909826 | orchestrator | =============================================================================== 2026-02-18 03:34:04.909845 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.19s 2026-02-18 03:34:04.909863 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.63s 2026-02-18 03:34:04.909882 | orchestrator | Check device availability ----------------------------------------------- 1.28s 2026-02-18 03:34:04.909900 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.65s 2026-02-18 03:34:04.909918 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2026-02-18 03:34:04.909935 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2026-02-18 03:34:04.909953 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s 2026-02-18 03:34:04.909972 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s 2026-02-18 03:34:04.909990 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s 2026-02-18 03:34:17.463329 | orchestrator | 2026-02-18 03:34:17 | INFO  | Task 271e49ac-e2a6-4285-96ec-776be7a834ad (facts) was prepared for execution. 2026-02-18 03:34:17.463445 | orchestrator | 2026-02-18 03:34:17 | INFO  | It takes a moment until task 271e49ac-e2a6-4285-96ec-776be7a834ad (facts) has been started and output is visible here. 
2026-02-18 03:34:31.381647 | orchestrator | 2026-02-18 03:34:31.381762 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-18 03:34:31.381790 | orchestrator | 2026-02-18 03:34:31.381802 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-18 03:34:31.381847 | orchestrator | Wednesday 18 February 2026 03:34:21 +0000 (0:00:00.272) 0:00:00.272 **** 2026-02-18 03:34:31.381858 | orchestrator | ok: [testbed-manager] 2026-02-18 03:34:31.381871 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:34:31.381881 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:34:31.381892 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:34:31.381903 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:34:31.381914 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:34:31.381924 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:34:31.381935 | orchestrator | 2026-02-18 03:34:31.381946 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-18 03:34:31.381957 | orchestrator | Wednesday 18 February 2026 03:34:22 +0000 (0:00:01.148) 0:00:01.420 **** 2026-02-18 03:34:31.381969 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:34:31.381980 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:34:31.381991 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:34:31.382002 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:34:31.382012 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:31.382080 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:34:31.382092 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:34:31.382102 | orchestrator | 2026-02-18 03:34:31.382113 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-18 03:34:31.382124 | orchestrator | 2026-02-18 03:34:31.382135 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-18 03:34:31.382145 | orchestrator | Wednesday 18 February 2026 03:34:24 +0000 (0:00:01.292) 0:00:02.713 **** 2026-02-18 03:34:31.382156 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:34:31.382169 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:34:31.382182 | orchestrator | ok: [testbed-manager] 2026-02-18 03:34:31.382194 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:34:31.382206 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:34:31.382218 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:34:31.382230 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:34:31.382242 | orchestrator | 2026-02-18 03:34:31.382255 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-18 03:34:31.382267 | orchestrator | 2026-02-18 03:34:31.382279 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-18 03:34:31.382292 | orchestrator | Wednesday 18 February 2026 03:34:30 +0000 (0:00:05.954) 0:00:08.668 **** 2026-02-18 03:34:31.382304 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:34:31.382316 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:34:31.382328 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:34:31.382341 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:34:31.382353 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:31.382365 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:34:31.382377 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:34:31.382389 | orchestrator | 2026-02-18 03:34:31.382401 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:34:31.382413 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:34:31.382476 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-18 03:34:31.382491 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:34:31.382504 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:34:31.382516 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:34:31.382527 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:34:31.382548 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:34:31.382559 | orchestrator | 2026-02-18 03:34:31.382570 | orchestrator | 2026-02-18 03:34:31.382580 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:34:31.382591 | orchestrator | Wednesday 18 February 2026 03:34:30 +0000 (0:00:00.634) 0:00:09.302 **** 2026-02-18 03:34:31.382708 | orchestrator | =============================================================================== 2026-02-18 03:34:31.382722 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.95s 2026-02-18 03:34:31.382733 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.29s 2026-02-18 03:34:31.382743 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s 2026-02-18 03:34:31.382754 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-02-18 03:34:33.872645 | orchestrator | 2026-02-18 03:34:33 | INFO  | Task bef00e30-4f80-4767-be29-c3ad7ec70329 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-02-18 03:34:33.872794 | orchestrator | 2026-02-18 03:34:33 | INFO  | It takes a moment until task bef00e30-4f80-4767-be29-c3ad7ec70329 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-18 03:34:46.859780 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-18 03:34:46.859896 | orchestrator | 2.16.14 2026-02-18 03:34:46.859913 | orchestrator | 2026-02-18 03:34:46.859924 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-18 03:34:46.859935 | orchestrator | 2026-02-18 03:34:46.859945 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-18 03:34:46.859955 | orchestrator | Wednesday 18 February 2026 03:34:38 +0000 (0:00:00.359) 0:00:00.359 **** 2026-02-18 03:34:46.859965 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-18 03:34:46.859975 | orchestrator | 2026-02-18 03:34:46.860001 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-18 03:34:46.860011 | orchestrator | Wednesday 18 February 2026 03:34:38 +0000 (0:00:00.252) 0:00:00.612 **** 2026-02-18 03:34:46.860023 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:34:46.860038 | orchestrator | 2026-02-18 03:34:46.860054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860069 | orchestrator | Wednesday 18 February 2026 03:34:39 +0000 (0:00:00.254) 0:00:00.866 **** 2026-02-18 03:34:46.860083 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-18 03:34:46.860098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-18 03:34:46.860114 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-18 03:34:46.860131 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-18 03:34:46.860147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-18 03:34:46.860162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-18 03:34:46.860179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-18 03:34:46.860196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-18 03:34:46.860207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-18 03:34:46.860217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-18 03:34:46.860227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-18 03:34:46.860236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-18 03:34:46.860270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-18 03:34:46.860281 | orchestrator | 2026-02-18 03:34:46.860290 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860300 | orchestrator | Wednesday 18 February 2026 03:34:39 +0000 (0:00:00.532) 0:00:01.399 **** 2026-02-18 03:34:46.860309 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.860320 | orchestrator | 2026-02-18 03:34:46.860329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860339 | orchestrator | Wednesday 18 February 2026 03:34:39 +0000 (0:00:00.246) 0:00:01.646 **** 2026-02-18 03:34:46.860348 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.860357 | orchestrator | 2026-02-18 03:34:46.860367 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860377 | orchestrator | Wednesday 18 February 2026 03:34:40 +0000 (0:00:00.230) 0:00:01.877 **** 2026-02-18 03:34:46.860386 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.860395 | orchestrator | 2026-02-18 03:34:46.860405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860414 | orchestrator | Wednesday 18 February 2026 03:34:40 +0000 (0:00:00.212) 0:00:02.090 **** 2026-02-18 03:34:46.860423 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.860433 | orchestrator | 2026-02-18 03:34:46.860442 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860452 | orchestrator | Wednesday 18 February 2026 03:34:40 +0000 (0:00:00.213) 0:00:02.303 **** 2026-02-18 03:34:46.860461 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.860470 | orchestrator | 2026-02-18 03:34:46.860480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860489 | orchestrator | Wednesday 18 February 2026 03:34:40 +0000 (0:00:00.221) 0:00:02.525 **** 2026-02-18 03:34:46.860498 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.860537 | orchestrator | 2026-02-18 03:34:46.860553 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860563 | orchestrator | Wednesday 18 February 2026 03:34:41 +0000 (0:00:00.214) 0:00:02.739 **** 2026-02-18 03:34:46.860572 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.860589 | orchestrator | 2026-02-18 03:34:46.860693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860715 | orchestrator | Wednesday 18 February 2026 03:34:41 +0000 (0:00:00.233) 0:00:02.973 **** 
2026-02-18 03:34:46.860729 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.860744 | orchestrator | 2026-02-18 03:34:46.860758 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860774 | orchestrator | Wednesday 18 February 2026 03:34:41 +0000 (0:00:00.199) 0:00:03.173 **** 2026-02-18 03:34:46.860791 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f) 2026-02-18 03:34:46.860808 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f) 2026-02-18 03:34:46.860822 | orchestrator | 2026-02-18 03:34:46.860836 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860875 | orchestrator | Wednesday 18 February 2026 03:34:41 +0000 (0:00:00.455) 0:00:03.629 **** 2026-02-18 03:34:46.860890 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f) 2026-02-18 03:34:46.860906 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f) 2026-02-18 03:34:46.860920 | orchestrator | 2026-02-18 03:34:46.860935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.860949 | orchestrator | Wednesday 18 February 2026 03:34:42 +0000 (0:00:00.745) 0:00:04.374 **** 2026-02-18 03:34:46.860976 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6) 2026-02-18 03:34:46.861007 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6) 2026-02-18 03:34:46.861024 | orchestrator | 2026-02-18 03:34:46.861040 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.861054 | orchestrator | Wednesday 18 February 2026 03:34:43 
+0000 (0:00:00.750) 0:00:05.124 **** 2026-02-18 03:34:46.861070 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911) 2026-02-18 03:34:46.861086 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911) 2026-02-18 03:34:46.861100 | orchestrator | 2026-02-18 03:34:46.861116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:34:46.861160 | orchestrator | Wednesday 18 February 2026 03:34:44 +0000 (0:00:00.969) 0:00:06.094 **** 2026-02-18 03:34:46.861177 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-18 03:34:46.861193 | orchestrator | 2026-02-18 03:34:46.861208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:46.861224 | orchestrator | Wednesday 18 February 2026 03:34:44 +0000 (0:00:00.400) 0:00:06.494 **** 2026-02-18 03:34:46.861240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-18 03:34:46.861255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-18 03:34:46.861272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-18 03:34:46.861288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-18 03:34:46.861305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-18 03:34:46.861323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-18 03:34:46.861339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-18 03:34:46.861355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop7) 2026-02-18 03:34:46.861371 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-18 03:34:46.861386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-18 03:34:46.861402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-18 03:34:46.861416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-18 03:34:46.861426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-18 03:34:46.861435 | orchestrator | 2026-02-18 03:34:46.861444 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:46.861454 | orchestrator | Wednesday 18 February 2026 03:34:45 +0000 (0:00:00.405) 0:00:06.899 **** 2026-02-18 03:34:46.861463 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.861473 | orchestrator | 2026-02-18 03:34:46.861482 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:46.861492 | orchestrator | Wednesday 18 February 2026 03:34:45 +0000 (0:00:00.227) 0:00:07.127 **** 2026-02-18 03:34:46.861543 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.861556 | orchestrator | 2026-02-18 03:34:46.861566 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:46.861576 | orchestrator | Wednesday 18 February 2026 03:34:45 +0000 (0:00:00.210) 0:00:07.337 **** 2026-02-18 03:34:46.861585 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.861595 | orchestrator | 2026-02-18 03:34:46.861604 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:46.861614 | orchestrator | Wednesday 18 February 2026 03:34:45 
+0000 (0:00:00.251) 0:00:07.589 **** 2026-02-18 03:34:46.861635 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.861645 | orchestrator | 2026-02-18 03:34:46.861655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:46.861664 | orchestrator | Wednesday 18 February 2026 03:34:46 +0000 (0:00:00.242) 0:00:07.831 **** 2026-02-18 03:34:46.861674 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.861684 | orchestrator | 2026-02-18 03:34:46.861693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:46.861703 | orchestrator | Wednesday 18 February 2026 03:34:46 +0000 (0:00:00.216) 0:00:08.047 **** 2026-02-18 03:34:46.861712 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.861722 | orchestrator | 2026-02-18 03:34:46.861731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:46.861741 | orchestrator | Wednesday 18 February 2026 03:34:46 +0000 (0:00:00.231) 0:00:08.279 **** 2026-02-18 03:34:46.861765 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:46.861775 | orchestrator | 2026-02-18 03:34:46.861798 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:55.295534 | orchestrator | Wednesday 18 February 2026 03:34:46 +0000 (0:00:00.277) 0:00:08.557 **** 2026-02-18 03:34:55.295634 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.295649 | orchestrator | 2026-02-18 03:34:55.295657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:55.295665 | orchestrator | Wednesday 18 February 2026 03:34:47 +0000 (0:00:00.224) 0:00:08.781 **** 2026-02-18 03:34:55.295680 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-18 03:34:55.295690 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-18 
03:34:55.295710 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-18 03:34:55.295717 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-18 03:34:55.295724 | orchestrator | 2026-02-18 03:34:55.295730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:55.295737 | orchestrator | Wednesday 18 February 2026 03:34:48 +0000 (0:00:01.210) 0:00:09.992 **** 2026-02-18 03:34:55.295744 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.295751 | orchestrator | 2026-02-18 03:34:55.295757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:55.295764 | orchestrator | Wednesday 18 February 2026 03:34:48 +0000 (0:00:00.219) 0:00:10.211 **** 2026-02-18 03:34:55.295772 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.295778 | orchestrator | 2026-02-18 03:34:55.295786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:55.295793 | orchestrator | Wednesday 18 February 2026 03:34:48 +0000 (0:00:00.242) 0:00:10.453 **** 2026-02-18 03:34:55.295799 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.295806 | orchestrator | 2026-02-18 03:34:55.295813 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:34:55.295821 | orchestrator | Wednesday 18 February 2026 03:34:48 +0000 (0:00:00.222) 0:00:10.676 **** 2026-02-18 03:34:55.295828 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.295835 | orchestrator | 2026-02-18 03:34:55.295842 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-18 03:34:55.295850 | orchestrator | Wednesday 18 February 2026 03:34:49 +0000 (0:00:00.233) 0:00:10.910 **** 2026-02-18 03:34:55.295857 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-18 03:34:55.295865 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-02-18 03:34:55.295873 | orchestrator | 2026-02-18 03:34:55.295877 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-18 03:34:55.295882 | orchestrator | Wednesday 18 February 2026 03:34:49 +0000 (0:00:00.178) 0:00:11.088 **** 2026-02-18 03:34:55.295886 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.295891 | orchestrator | 2026-02-18 03:34:55.295895 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-18 03:34:55.295900 | orchestrator | Wednesday 18 February 2026 03:34:49 +0000 (0:00:00.149) 0:00:11.237 **** 2026-02-18 03:34:55.295918 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.295923 | orchestrator | 2026-02-18 03:34:55.295927 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-18 03:34:55.295932 | orchestrator | Wednesday 18 February 2026 03:34:49 +0000 (0:00:00.150) 0:00:11.388 **** 2026-02-18 03:34:55.295936 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.295940 | orchestrator | 2026-02-18 03:34:55.295944 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-18 03:34:55.295949 | orchestrator | Wednesday 18 February 2026 03:34:49 +0000 (0:00:00.151) 0:00:11.539 **** 2026-02-18 03:34:55.295953 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:34:55.295958 | orchestrator | 2026-02-18 03:34:55.295962 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-18 03:34:55.295966 | orchestrator | Wednesday 18 February 2026 03:34:49 +0000 (0:00:00.153) 0:00:11.693 **** 2026-02-18 03:34:55.295971 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'}}) 2026-02-18 03:34:55.295976 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c707e11d-d3db-5907-b25a-51e31fa350e2'}}) 2026-02-18 03:34:55.295980 | orchestrator | 2026-02-18 03:34:55.295984 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-18 03:34:55.295989 | orchestrator | Wednesday 18 February 2026 03:34:50 +0000 (0:00:00.180) 0:00:11.874 **** 2026-02-18 03:34:55.295994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'}})  2026-02-18 03:34:55.296000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c707e11d-d3db-5907-b25a-51e31fa350e2'}})  2026-02-18 03:34:55.296004 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.296008 | orchestrator | 2026-02-18 03:34:55.296013 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-18 03:34:55.296017 | orchestrator | Wednesday 18 February 2026 03:34:50 +0000 (0:00:00.382) 0:00:12.257 **** 2026-02-18 03:34:55.296022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'}})  2026-02-18 03:34:55.296027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c707e11d-d3db-5907-b25a-51e31fa350e2'}})  2026-02-18 03:34:55.296032 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.296037 | orchestrator | 2026-02-18 03:34:55.296042 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-18 03:34:55.296047 | orchestrator | Wednesday 18 February 2026 03:34:50 +0000 (0:00:00.164) 0:00:12.421 **** 2026-02-18 03:34:55.296052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'}})  2026-02-18 03:34:55.296069 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c707e11d-d3db-5907-b25a-51e31fa350e2'}})  2026-02-18 03:34:55.296074 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.296079 | orchestrator | 2026-02-18 03:34:55.296085 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-18 03:34:55.296090 | orchestrator | Wednesday 18 February 2026 03:34:50 +0000 (0:00:00.161) 0:00:12.583 **** 2026-02-18 03:34:55.296095 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:34:55.296100 | orchestrator | 2026-02-18 03:34:55.296104 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-18 03:34:55.296114 | orchestrator | Wednesday 18 February 2026 03:34:51 +0000 (0:00:00.158) 0:00:12.742 **** 2026-02-18 03:34:55.296119 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:34:55.296124 | orchestrator | 2026-02-18 03:34:55.296129 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-18 03:34:55.296134 | orchestrator | Wednesday 18 February 2026 03:34:51 +0000 (0:00:00.155) 0:00:12.898 **** 2026-02-18 03:34:55.296143 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.296148 | orchestrator | 2026-02-18 03:34:55.296154 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-18 03:34:55.296159 | orchestrator | Wednesday 18 February 2026 03:34:51 +0000 (0:00:00.132) 0:00:13.031 **** 2026-02-18 03:34:55.296163 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.296169 | orchestrator | 2026-02-18 03:34:55.296177 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-18 03:34:55.296184 | orchestrator | Wednesday 18 February 2026 03:34:51 +0000 (0:00:00.178) 0:00:13.209 **** 2026-02-18 03:34:55.296190 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.296197 | orchestrator | 2026-02-18 
03:34:55.296203 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-18 03:34:55.296210 | orchestrator | Wednesday 18 February 2026 03:34:51 +0000 (0:00:00.152) 0:00:13.361 **** 2026-02-18 03:34:55.296217 | orchestrator | ok: [testbed-node-3] => { 2026-02-18 03:34:55.296224 | orchestrator |  "ceph_osd_devices": { 2026-02-18 03:34:55.296231 | orchestrator |  "sdb": { 2026-02-18 03:34:55.296237 | orchestrator |  "osd_lvm_uuid": "62ce64d1-56ba-5b5c-b13c-8c9d2c247f31" 2026-02-18 03:34:55.296244 | orchestrator |  }, 2026-02-18 03:34:55.296251 | orchestrator |  "sdc": { 2026-02-18 03:34:55.296258 | orchestrator |  "osd_lvm_uuid": "c707e11d-d3db-5907-b25a-51e31fa350e2" 2026-02-18 03:34:55.296265 | orchestrator |  } 2026-02-18 03:34:55.296273 | orchestrator |  } 2026-02-18 03:34:55.296280 | orchestrator | } 2026-02-18 03:34:55.296287 | orchestrator | 2026-02-18 03:34:55.296296 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-18 03:34:55.296300 | orchestrator | Wednesday 18 February 2026 03:34:51 +0000 (0:00:00.166) 0:00:13.528 **** 2026-02-18 03:34:55.296305 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.296309 | orchestrator | 2026-02-18 03:34:55.296313 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-18 03:34:55.296317 | orchestrator | Wednesday 18 February 2026 03:34:51 +0000 (0:00:00.161) 0:00:13.690 **** 2026-02-18 03:34:55.296322 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.296326 | orchestrator | 2026-02-18 03:34:55.296330 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-18 03:34:55.296334 | orchestrator | Wednesday 18 February 2026 03:34:52 +0000 (0:00:00.162) 0:00:13.853 **** 2026-02-18 03:34:55.296339 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:34:55.296343 | orchestrator | 2026-02-18 
03:34:55.296347 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-18 03:34:55.296351 | orchestrator | Wednesday 18 February 2026 03:34:52 +0000 (0:00:00.152) 0:00:14.005 **** 2026-02-18 03:34:55.296356 | orchestrator | changed: [testbed-node-3] => { 2026-02-18 03:34:55.296360 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-18 03:34:55.296365 | orchestrator |  "ceph_osd_devices": { 2026-02-18 03:34:55.296369 | orchestrator |  "sdb": { 2026-02-18 03:34:55.296373 | orchestrator |  "osd_lvm_uuid": "62ce64d1-56ba-5b5c-b13c-8c9d2c247f31" 2026-02-18 03:34:55.296377 | orchestrator |  }, 2026-02-18 03:34:55.296382 | orchestrator |  "sdc": { 2026-02-18 03:34:55.296386 | orchestrator |  "osd_lvm_uuid": "c707e11d-d3db-5907-b25a-51e31fa350e2" 2026-02-18 03:34:55.296390 | orchestrator |  } 2026-02-18 03:34:55.296395 | orchestrator |  }, 2026-02-18 03:34:55.296399 | orchestrator |  "lvm_volumes": [ 2026-02-18 03:34:55.296403 | orchestrator |  { 2026-02-18 03:34:55.296408 | orchestrator |  "data": "osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31", 2026-02-18 03:34:55.296413 | orchestrator |  "data_vg": "ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31" 2026-02-18 03:34:55.296420 | orchestrator |  }, 2026-02-18 03:34:55.296429 | orchestrator |  { 2026-02-18 03:34:55.296439 | orchestrator |  "data": "osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2", 2026-02-18 03:34:55.296469 | orchestrator |  "data_vg": "ceph-c707e11d-d3db-5907-b25a-51e31fa350e2" 2026-02-18 03:34:55.296476 | orchestrator |  } 2026-02-18 03:34:55.296483 | orchestrator |  ] 2026-02-18 03:34:55.296489 | orchestrator |  } 2026-02-18 03:34:55.296496 | orchestrator | } 2026-02-18 03:34:55.296503 | orchestrator | 2026-02-18 03:34:55.296510 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-18 03:34:55.296517 | orchestrator | Wednesday 18 February 2026 03:34:52 +0000 (0:00:00.461) 0:00:14.467 **** 2026-02-18 
03:34:55.296524 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-18 03:34:55.296531 | orchestrator | 2026-02-18 03:34:55.296537 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-18 03:34:55.296544 | orchestrator | 2026-02-18 03:34:55.296551 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-18 03:34:55.296558 | orchestrator | Wednesday 18 February 2026 03:34:54 +0000 (0:00:02.006) 0:00:16.473 **** 2026-02-18 03:34:55.296564 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-18 03:34:55.296571 | orchestrator | 2026-02-18 03:34:55.296579 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-18 03:34:55.296586 | orchestrator | Wednesday 18 February 2026 03:34:55 +0000 (0:00:00.278) 0:00:16.752 **** 2026-02-18 03:34:55.296593 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:34:55.296600 | orchestrator | 2026-02-18 03:34:55.296616 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.519597 | orchestrator | Wednesday 18 February 2026 03:34:55 +0000 (0:00:00.246) 0:00:16.999 **** 2026-02-18 03:35:05.519701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-18 03:35:05.519715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-18 03:35:05.519727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-18 03:35:05.519766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-18 03:35:05.519778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-18 03:35:05.519789 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-18 03:35:05.519800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-18 03:35:05.519811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-18 03:35:05.519821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-18 03:35:05.519832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-18 03:35:05.519843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-18 03:35:05.519854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-18 03:35:05.519864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-18 03:35:05.519882 | orchestrator | 2026-02-18 03:35:05.519900 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.519911 | orchestrator | Wednesday 18 February 2026 03:34:55 +0000 (0:00:00.443) 0:00:17.442 **** 2026-02-18 03:35:05.519925 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.519943 | orchestrator | 2026-02-18 03:35:05.519955 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.519965 | orchestrator | Wednesday 18 February 2026 03:34:55 +0000 (0:00:00.240) 0:00:17.683 **** 2026-02-18 03:35:05.519976 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.519987 | orchestrator | 2026-02-18 03:35:05.519997 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.520008 | orchestrator | Wednesday 18 February 2026 03:34:56 +0000 (0:00:00.237) 0:00:17.920 **** 2026-02-18 03:35:05.520044 | orchestrator | skipping: 
[testbed-node-4] 2026-02-18 03:35:05.520058 | orchestrator | 2026-02-18 03:35:05.520071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.520083 | orchestrator | Wednesday 18 February 2026 03:34:56 +0000 (0:00:00.243) 0:00:18.164 **** 2026-02-18 03:35:05.520095 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.520107 | orchestrator | 2026-02-18 03:35:05.520119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.520131 | orchestrator | Wednesday 18 February 2026 03:34:57 +0000 (0:00:00.674) 0:00:18.838 **** 2026-02-18 03:35:05.520143 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.520156 | orchestrator | 2026-02-18 03:35:05.520168 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.520180 | orchestrator | Wednesday 18 February 2026 03:34:57 +0000 (0:00:00.261) 0:00:19.100 **** 2026-02-18 03:35:05.520193 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.520206 | orchestrator | 2026-02-18 03:35:05.520218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.520230 | orchestrator | Wednesday 18 February 2026 03:34:57 +0000 (0:00:00.236) 0:00:19.337 **** 2026-02-18 03:35:05.520243 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.520255 | orchestrator | 2026-02-18 03:35:05.520267 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.520280 | orchestrator | Wednesday 18 February 2026 03:34:57 +0000 (0:00:00.212) 0:00:19.549 **** 2026-02-18 03:35:05.520292 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.520305 | orchestrator | 2026-02-18 03:35:05.520317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.520329 | 
orchestrator | Wednesday 18 February 2026 03:34:58 +0000 (0:00:00.233) 0:00:19.783 **** 2026-02-18 03:35:05.520342 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a) 2026-02-18 03:35:05.520362 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a) 2026-02-18 03:35:05.520381 | orchestrator | 2026-02-18 03:35:05.520437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.520460 | orchestrator | Wednesday 18 February 2026 03:34:58 +0000 (0:00:00.458) 0:00:20.241 **** 2026-02-18 03:35:05.520478 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3) 2026-02-18 03:35:05.520498 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3) 2026-02-18 03:35:05.520517 | orchestrator | 2026-02-18 03:35:05.520535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.520554 | orchestrator | Wednesday 18 February 2026 03:34:59 +0000 (0:00:00.552) 0:00:20.794 **** 2026-02-18 03:35:05.520573 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19) 2026-02-18 03:35:05.520593 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19) 2026-02-18 03:35:05.520613 | orchestrator | 2026-02-18 03:35:05.520633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.520677 | orchestrator | Wednesday 18 February 2026 03:34:59 +0000 (0:00:00.484) 0:00:21.278 **** 2026-02-18 03:35:05.520690 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b) 2026-02-18 03:35:05.520700 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b) 2026-02-18 03:35:05.520711 | orchestrator | 2026-02-18 03:35:05.520722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:05.520741 | orchestrator | Wednesday 18 February 2026 03:35:00 +0000 (0:00:00.715) 0:00:21.994 **** 2026-02-18 03:35:05.520752 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-18 03:35:05.520776 | orchestrator | 2026-02-18 03:35:05.520787 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:05.520797 | orchestrator | Wednesday 18 February 2026 03:35:00 +0000 (0:00:00.659) 0:00:22.654 **** 2026-02-18 03:35:05.520808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-18 03:35:05.520819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-18 03:35:05.520829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-18 03:35:05.520840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-18 03:35:05.520851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-18 03:35:05.520861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-18 03:35:05.520872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-18 03:35:05.520882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-18 03:35:05.520893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-18 03:35:05.520904 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-18 03:35:05.520915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-18 03:35:05.520926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-18 03:35:05.520936 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-18 03:35:05.520947 | orchestrator | 2026-02-18 03:35:05.520958 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:05.520969 | orchestrator | Wednesday 18 February 2026 03:35:01 +0000 (0:00:00.935) 0:00:23.589 **** 2026-02-18 03:35:05.520979 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.520990 | orchestrator | 2026-02-18 03:35:05.521001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:05.521011 | orchestrator | Wednesday 18 February 2026 03:35:02 +0000 (0:00:00.233) 0:00:23.822 **** 2026-02-18 03:35:05.521022 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.521033 | orchestrator | 2026-02-18 03:35:05.521044 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:05.521054 | orchestrator | Wednesday 18 February 2026 03:35:02 +0000 (0:00:00.225) 0:00:24.048 **** 2026-02-18 03:35:05.521065 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.521076 | orchestrator | 2026-02-18 03:35:05.521086 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:05.521097 | orchestrator | Wednesday 18 February 2026 03:35:02 +0000 (0:00:00.225) 0:00:24.273 **** 2026-02-18 03:35:05.521108 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.521119 | orchestrator | 2026-02-18 03:35:05.521129 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-18 03:35:05.521140 | orchestrator | Wednesday 18 February 2026 03:35:02 +0000 (0:00:00.233) 0:00:24.507 **** 2026-02-18 03:35:05.521151 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.521161 | orchestrator | 2026-02-18 03:35:05.521172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:05.521183 | orchestrator | Wednesday 18 February 2026 03:35:03 +0000 (0:00:00.241) 0:00:24.748 **** 2026-02-18 03:35:05.521194 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.521204 | orchestrator | 2026-02-18 03:35:05.521215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:05.521226 | orchestrator | Wednesday 18 February 2026 03:35:03 +0000 (0:00:00.223) 0:00:24.972 **** 2026-02-18 03:35:05.521247 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.521258 | orchestrator | 2026-02-18 03:35:05.521269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:05.521279 | orchestrator | Wednesday 18 February 2026 03:35:03 +0000 (0:00:00.238) 0:00:25.210 **** 2026-02-18 03:35:05.521290 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:05.521301 | orchestrator | 2026-02-18 03:35:05.521311 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:05.521322 | orchestrator | Wednesday 18 February 2026 03:35:03 +0000 (0:00:00.246) 0:00:25.456 **** 2026-02-18 03:35:05.521333 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-18 03:35:05.521344 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-18 03:35:05.521355 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-18 03:35:05.521366 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-18 03:35:05.521377 | orchestrator | 2026-02-18 
03:35:05.521387 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:05.521434 | orchestrator | Wednesday 18 February 2026 03:35:04 +0000 (0:00:01.023) 0:00:26.480 **** 2026-02-18 03:35:05.521452 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.131292 | orchestrator | 2026-02-18 03:35:12.131467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:12.131497 | orchestrator | Wednesday 18 February 2026 03:35:05 +0000 (0:00:00.743) 0:00:27.223 **** 2026-02-18 03:35:12.131509 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.131521 | orchestrator | 2026-02-18 03:35:12.131533 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:12.131544 | orchestrator | Wednesday 18 February 2026 03:35:05 +0000 (0:00:00.232) 0:00:27.456 **** 2026-02-18 03:35:12.131573 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.131586 | orchestrator | 2026-02-18 03:35:12.131605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:12.131624 | orchestrator | Wednesday 18 February 2026 03:35:05 +0000 (0:00:00.243) 0:00:27.700 **** 2026-02-18 03:35:12.131642 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.131659 | orchestrator | 2026-02-18 03:35:12.131675 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-18 03:35:12.131692 | orchestrator | Wednesday 18 February 2026 03:35:06 +0000 (0:00:00.242) 0:00:27.942 **** 2026-02-18 03:35:12.131709 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-18 03:35:12.131727 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-18 03:35:12.131744 | orchestrator | 2026-02-18 03:35:12.131761 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-02-18 03:35:12.131779 | orchestrator | Wednesday 18 February 2026 03:35:06 +0000 (0:00:00.255) 0:00:28.198 **** 2026-02-18 03:35:12.131798 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.131816 | orchestrator | 2026-02-18 03:35:12.131834 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-18 03:35:12.131851 | orchestrator | Wednesday 18 February 2026 03:35:06 +0000 (0:00:00.164) 0:00:28.362 **** 2026-02-18 03:35:12.131867 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.131885 | orchestrator | 2026-02-18 03:35:12.131902 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-18 03:35:12.131919 | orchestrator | Wednesday 18 February 2026 03:35:06 +0000 (0:00:00.150) 0:00:28.513 **** 2026-02-18 03:35:12.131935 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.131952 | orchestrator | 2026-02-18 03:35:12.131969 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-18 03:35:12.131987 | orchestrator | Wednesday 18 February 2026 03:35:06 +0000 (0:00:00.136) 0:00:28.650 **** 2026-02-18 03:35:12.132005 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:35:12.132024 | orchestrator | 2026-02-18 03:35:12.132041 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-18 03:35:12.132059 | orchestrator | Wednesday 18 February 2026 03:35:07 +0000 (0:00:00.137) 0:00:28.787 **** 2026-02-18 03:35:12.132108 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ef111f9-34b8-55e5-9a40-00a35805e906'}}) 2026-02-18 03:35:12.132128 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '47b33137-1c4f-52d4-af64-ebc2c48f95b1'}}) 2026-02-18 03:35:12.132147 | orchestrator | 2026-02-18 03:35:12.132165 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-18 03:35:12.132183 | orchestrator | Wednesday 18 February 2026 03:35:07 +0000 (0:00:00.170) 0:00:28.958 **** 2026-02-18 03:35:12.132202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ef111f9-34b8-55e5-9a40-00a35805e906'}})  2026-02-18 03:35:12.132223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '47b33137-1c4f-52d4-af64-ebc2c48f95b1'}})  2026-02-18 03:35:12.132241 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.132258 | orchestrator | 2026-02-18 03:35:12.132275 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-18 03:35:12.132294 | orchestrator | Wednesday 18 February 2026 03:35:07 +0000 (0:00:00.162) 0:00:29.121 **** 2026-02-18 03:35:12.132312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ef111f9-34b8-55e5-9a40-00a35805e906'}})  2026-02-18 03:35:12.132331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '47b33137-1c4f-52d4-af64-ebc2c48f95b1'}})  2026-02-18 03:35:12.132347 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.132450 | orchestrator | 2026-02-18 03:35:12.132471 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-18 03:35:12.132493 | orchestrator | Wednesday 18 February 2026 03:35:07 +0000 (0:00:00.416) 0:00:29.537 **** 2026-02-18 03:35:12.132513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ef111f9-34b8-55e5-9a40-00a35805e906'}})  2026-02-18 03:35:12.132535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '47b33137-1c4f-52d4-af64-ebc2c48f95b1'}})  2026-02-18 03:35:12.132554 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.132574 | 
orchestrator | 2026-02-18 03:35:12.132596 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-18 03:35:12.132618 | orchestrator | Wednesday 18 February 2026 03:35:08 +0000 (0:00:00.201) 0:00:29.739 **** 2026-02-18 03:35:12.132637 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:35:12.132649 | orchestrator | 2026-02-18 03:35:12.132661 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-18 03:35:12.132674 | orchestrator | Wednesday 18 February 2026 03:35:08 +0000 (0:00:00.165) 0:00:29.905 **** 2026-02-18 03:35:12.132686 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:35:12.132699 | orchestrator | 2026-02-18 03:35:12.132718 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-18 03:35:12.132736 | orchestrator | Wednesday 18 February 2026 03:35:08 +0000 (0:00:00.166) 0:00:30.071 **** 2026-02-18 03:35:12.132783 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.132802 | orchestrator | 2026-02-18 03:35:12.132819 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-18 03:35:12.132834 | orchestrator | Wednesday 18 February 2026 03:35:08 +0000 (0:00:00.153) 0:00:30.224 **** 2026-02-18 03:35:12.132849 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.132866 | orchestrator | 2026-02-18 03:35:12.132883 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-18 03:35:12.132900 | orchestrator | Wednesday 18 February 2026 03:35:08 +0000 (0:00:00.147) 0:00:30.372 **** 2026-02-18 03:35:12.132933 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.132953 | orchestrator | 2026-02-18 03:35:12.132971 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-18 03:35:12.132991 | orchestrator | Wednesday 18 February 2026 03:35:08 +0000 
(0:00:00.154) 0:00:30.526 **** 2026-02-18 03:35:12.133017 | orchestrator | ok: [testbed-node-4] => { 2026-02-18 03:35:12.133028 | orchestrator |  "ceph_osd_devices": { 2026-02-18 03:35:12.133039 | orchestrator |  "sdb": { 2026-02-18 03:35:12.133050 | orchestrator |  "osd_lvm_uuid": "8ef111f9-34b8-55e5-9a40-00a35805e906" 2026-02-18 03:35:12.133060 | orchestrator |  }, 2026-02-18 03:35:12.133071 | orchestrator |  "sdc": { 2026-02-18 03:35:12.133081 | orchestrator |  "osd_lvm_uuid": "47b33137-1c4f-52d4-af64-ebc2c48f95b1" 2026-02-18 03:35:12.133092 | orchestrator |  } 2026-02-18 03:35:12.133102 | orchestrator |  } 2026-02-18 03:35:12.133113 | orchestrator | } 2026-02-18 03:35:12.133124 | orchestrator | 2026-02-18 03:35:12.133135 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-18 03:35:12.133146 | orchestrator | Wednesday 18 February 2026 03:35:08 +0000 (0:00:00.157) 0:00:30.684 **** 2026-02-18 03:35:12.133157 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.133168 | orchestrator | 2026-02-18 03:35:12.133178 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-18 03:35:12.133190 | orchestrator | Wednesday 18 February 2026 03:35:09 +0000 (0:00:00.144) 0:00:30.828 **** 2026-02-18 03:35:12.133207 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.133225 | orchestrator | 2026-02-18 03:35:12.133243 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-18 03:35:12.133260 | orchestrator | Wednesday 18 February 2026 03:35:09 +0000 (0:00:00.146) 0:00:30.975 **** 2026-02-18 03:35:12.133279 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:35:12.133299 | orchestrator | 2026-02-18 03:35:12.133315 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-18 03:35:12.133326 | orchestrator | Wednesday 18 February 2026 03:35:09 +0000 
(0:00:00.168) 0:00:31.143 **** 2026-02-18 03:35:12.133337 | orchestrator | changed: [testbed-node-4] => { 2026-02-18 03:35:12.133347 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-18 03:35:12.133393 | orchestrator |  "ceph_osd_devices": { 2026-02-18 03:35:12.133410 | orchestrator |  "sdb": { 2026-02-18 03:35:12.133421 | orchestrator |  "osd_lvm_uuid": "8ef111f9-34b8-55e5-9a40-00a35805e906" 2026-02-18 03:35:12.133432 | orchestrator |  }, 2026-02-18 03:35:12.133442 | orchestrator |  "sdc": { 2026-02-18 03:35:12.133453 | orchestrator |  "osd_lvm_uuid": "47b33137-1c4f-52d4-af64-ebc2c48f95b1" 2026-02-18 03:35:12.133464 | orchestrator |  } 2026-02-18 03:35:12.133474 | orchestrator |  }, 2026-02-18 03:35:12.133485 | orchestrator |  "lvm_volumes": [ 2026-02-18 03:35:12.133496 | orchestrator |  { 2026-02-18 03:35:12.133506 | orchestrator |  "data": "osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906", 2026-02-18 03:35:12.133518 | orchestrator |  "data_vg": "ceph-8ef111f9-34b8-55e5-9a40-00a35805e906" 2026-02-18 03:35:12.133528 | orchestrator |  }, 2026-02-18 03:35:12.133553 | orchestrator |  { 2026-02-18 03:35:12.133565 | orchestrator |  "data": "osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1", 2026-02-18 03:35:12.133575 | orchestrator |  "data_vg": "ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1" 2026-02-18 03:35:12.133597 | orchestrator |  } 2026-02-18 03:35:12.133608 | orchestrator |  ] 2026-02-18 03:35:12.133619 | orchestrator |  } 2026-02-18 03:35:12.133630 | orchestrator | } 2026-02-18 03:35:12.133641 | orchestrator | 2026-02-18 03:35:12.133652 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-18 03:35:12.133662 | orchestrator | Wednesday 18 February 2026 03:35:09 +0000 (0:00:00.486) 0:00:31.630 **** 2026-02-18 03:35:12.133673 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-18 03:35:12.133684 | orchestrator | 2026-02-18 03:35:12.133694 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-18 03:35:12.133705 | orchestrator | 2026-02-18 03:35:12.133715 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-18 03:35:12.133736 | orchestrator | Wednesday 18 February 2026 03:35:11 +0000 (0:00:01.234) 0:00:32.865 **** 2026-02-18 03:35:12.133747 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-18 03:35:12.133757 | orchestrator | 2026-02-18 03:35:12.133768 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-18 03:35:12.133779 | orchestrator | Wednesday 18 February 2026 03:35:11 +0000 (0:00:00.279) 0:00:33.144 **** 2026-02-18 03:35:12.133789 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:35:12.133800 | orchestrator | 2026-02-18 03:35:12.133810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:12.133821 | orchestrator | Wednesday 18 February 2026 03:35:11 +0000 (0:00:00.277) 0:00:33.422 **** 2026-02-18 03:35:12.133832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-18 03:35:12.133842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-18 03:35:12.133853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-18 03:35:12.133863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-18 03:35:12.133874 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-18 03:35:12.133897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-18 03:35:21.699407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-18 03:35:21.699522 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-18 03:35:21.699536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-18 03:35:21.699565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-18 03:35:21.699576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-18 03:35:21.699587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-18 03:35:21.699598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-18 03:35:21.699609 | orchestrator | 2026-02-18 03:35:21.699620 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.699632 | orchestrator | Wednesday 18 February 2026 03:35:12 +0000 (0:00:00.408) 0:00:33.831 **** 2026-02-18 03:35:21.699643 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.699655 | orchestrator | 2026-02-18 03:35:21.699666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.699676 | orchestrator | Wednesday 18 February 2026 03:35:12 +0000 (0:00:00.218) 0:00:34.049 **** 2026-02-18 03:35:21.699686 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.699697 | orchestrator | 2026-02-18 03:35:21.699707 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.699718 | orchestrator | Wednesday 18 February 2026 03:35:12 +0000 (0:00:00.219) 0:00:34.268 **** 2026-02-18 03:35:21.699728 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.699739 | orchestrator | 2026-02-18 03:35:21.699750 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.699760 | 
orchestrator | Wednesday 18 February 2026 03:35:12 +0000 (0:00:00.202) 0:00:34.471 **** 2026-02-18 03:35:21.699771 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.699781 | orchestrator | 2026-02-18 03:35:21.699792 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.699803 | orchestrator | Wednesday 18 February 2026 03:35:13 +0000 (0:00:00.691) 0:00:35.163 **** 2026-02-18 03:35:21.699813 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.699824 | orchestrator | 2026-02-18 03:35:21.699834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.699845 | orchestrator | Wednesday 18 February 2026 03:35:13 +0000 (0:00:00.263) 0:00:35.426 **** 2026-02-18 03:35:21.699878 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.699891 | orchestrator | 2026-02-18 03:35:21.699903 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.699916 | orchestrator | Wednesday 18 February 2026 03:35:13 +0000 (0:00:00.233) 0:00:35.659 **** 2026-02-18 03:35:21.699928 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.699940 | orchestrator | 2026-02-18 03:35:21.699952 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.699964 | orchestrator | Wednesday 18 February 2026 03:35:14 +0000 (0:00:00.228) 0:00:35.887 **** 2026-02-18 03:35:21.699976 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.699988 | orchestrator | 2026-02-18 03:35:21.700000 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.700012 | orchestrator | Wednesday 18 February 2026 03:35:14 +0000 (0:00:00.217) 0:00:36.105 **** 2026-02-18 03:35:21.700024 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039) 2026-02-18 03:35:21.700037 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039) 2026-02-18 03:35:21.700049 | orchestrator | 2026-02-18 03:35:21.700062 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.700074 | orchestrator | Wednesday 18 February 2026 03:35:14 +0000 (0:00:00.456) 0:00:36.562 **** 2026-02-18 03:35:21.700086 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322) 2026-02-18 03:35:21.700098 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322) 2026-02-18 03:35:21.700110 | orchestrator | 2026-02-18 03:35:21.700123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.700135 | orchestrator | Wednesday 18 February 2026 03:35:15 +0000 (0:00:00.496) 0:00:37.059 **** 2026-02-18 03:35:21.700147 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d) 2026-02-18 03:35:21.700159 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d) 2026-02-18 03:35:21.700171 | orchestrator | 2026-02-18 03:35:21.700183 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:35:21.700195 | orchestrator | Wednesday 18 February 2026 03:35:15 +0000 (0:00:00.493) 0:00:37.552 **** 2026-02-18 03:35:21.700208 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d) 2026-02-18 03:35:21.700221 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d) 2026-02-18 03:35:21.700234 | orchestrator | 2026-02-18 03:35:21.700246 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-18 03:35:21.700256 | orchestrator | Wednesday 18 February 2026 03:35:16 +0000 (0:00:00.487) 0:00:38.040 **** 2026-02-18 03:35:21.700267 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-18 03:35:21.700277 | orchestrator | 2026-02-18 03:35:21.700288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.700356 | orchestrator | Wednesday 18 February 2026 03:35:16 +0000 (0:00:00.371) 0:00:38.412 **** 2026-02-18 03:35:21.700369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-18 03:35:21.700379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-18 03:35:21.700390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-18 03:35:21.700407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-18 03:35:21.700418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-18 03:35:21.700428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-18 03:35:21.700446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-18 03:35:21.700457 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-18 03:35:21.700467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-18 03:35:21.700478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-18 03:35:21.700488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-18 03:35:21.700499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-18 03:35:21.700509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-18 03:35:21.700520 | orchestrator | 2026-02-18 03:35:21.700530 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.700540 | orchestrator | Wednesday 18 February 2026 03:35:17 +0000 (0:00:00.759) 0:00:39.171 **** 2026-02-18 03:35:21.700563 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.700574 | orchestrator | 2026-02-18 03:35:21.700585 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.700595 | orchestrator | Wednesday 18 February 2026 03:35:17 +0000 (0:00:00.234) 0:00:39.405 **** 2026-02-18 03:35:21.700606 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.700616 | orchestrator | 2026-02-18 03:35:21.700627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.700637 | orchestrator | Wednesday 18 February 2026 03:35:17 +0000 (0:00:00.250) 0:00:39.656 **** 2026-02-18 03:35:21.700648 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.700658 | orchestrator | 2026-02-18 03:35:21.700669 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.700680 | orchestrator | Wednesday 18 February 2026 03:35:18 +0000 (0:00:00.236) 0:00:39.892 **** 2026-02-18 03:35:21.700690 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.700701 | orchestrator | 2026-02-18 03:35:21.700711 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.700722 | orchestrator | Wednesday 18 February 2026 03:35:18 +0000 (0:00:00.248) 0:00:40.141 **** 2026-02-18 03:35:21.700732 
| orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.700742 | orchestrator | 2026-02-18 03:35:21.700753 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.700764 | orchestrator | Wednesday 18 February 2026 03:35:18 +0000 (0:00:00.245) 0:00:40.387 **** 2026-02-18 03:35:21.700774 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.700785 | orchestrator | 2026-02-18 03:35:21.700795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.700806 | orchestrator | Wednesday 18 February 2026 03:35:18 +0000 (0:00:00.223) 0:00:40.611 **** 2026-02-18 03:35:21.700816 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.700826 | orchestrator | 2026-02-18 03:35:21.700837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.700848 | orchestrator | Wednesday 18 February 2026 03:35:19 +0000 (0:00:00.220) 0:00:40.831 **** 2026-02-18 03:35:21.700858 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.700869 | orchestrator | 2026-02-18 03:35:21.700879 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.700890 | orchestrator | Wednesday 18 February 2026 03:35:19 +0000 (0:00:00.212) 0:00:41.044 **** 2026-02-18 03:35:21.700900 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-18 03:35:21.700911 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-18 03:35:21.700921 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-18 03:35:21.700932 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-18 03:35:21.700942 | orchestrator | 2026-02-18 03:35:21.700959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.700970 | orchestrator | Wednesday 18 February 2026 03:35:20 +0000 (0:00:00.944) 
0:00:41.988 **** 2026-02-18 03:35:21.700980 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.700991 | orchestrator | 2026-02-18 03:35:21.701001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.701012 | orchestrator | Wednesday 18 February 2026 03:35:20 +0000 (0:00:00.222) 0:00:42.211 **** 2026-02-18 03:35:21.701022 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.701033 | orchestrator | 2026-02-18 03:35:21.701043 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.701054 | orchestrator | Wednesday 18 February 2026 03:35:20 +0000 (0:00:00.221) 0:00:42.432 **** 2026-02-18 03:35:21.701064 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.701074 | orchestrator | 2026-02-18 03:35:21.701085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:35:21.701096 | orchestrator | Wednesday 18 February 2026 03:35:21 +0000 (0:00:00.753) 0:00:43.186 **** 2026-02-18 03:35:21.701106 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:21.701117 | orchestrator | 2026-02-18 03:35:21.701133 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-18 03:35:26.079809 | orchestrator | Wednesday 18 February 2026 03:35:21 +0000 (0:00:00.217) 0:00:43.403 **** 2026-02-18 03:35:26.079969 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-18 03:35:26.079989 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-18 03:35:26.080001 | orchestrator | 2026-02-18 03:35:26.080013 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-18 03:35:26.080043 | orchestrator | Wednesday 18 February 2026 03:35:21 +0000 (0:00:00.197) 0:00:43.601 **** 2026-02-18 03:35:26.080055 | orchestrator | skipping: 
[testbed-node-5] 2026-02-18 03:35:26.080067 | orchestrator | 2026-02-18 03:35:26.080078 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-18 03:35:26.080089 | orchestrator | Wednesday 18 February 2026 03:35:22 +0000 (0:00:00.147) 0:00:43.748 **** 2026-02-18 03:35:26.080100 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:26.080111 | orchestrator | 2026-02-18 03:35:26.080122 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-18 03:35:26.080133 | orchestrator | Wednesday 18 February 2026 03:35:22 +0000 (0:00:00.144) 0:00:43.893 **** 2026-02-18 03:35:26.080158 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:26.080169 | orchestrator | 2026-02-18 03:35:26.080180 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-18 03:35:26.080191 | orchestrator | Wednesday 18 February 2026 03:35:22 +0000 (0:00:00.155) 0:00:44.048 **** 2026-02-18 03:35:26.080202 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:35:26.080213 | orchestrator | 2026-02-18 03:35:26.080224 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-18 03:35:26.080235 | orchestrator | Wednesday 18 February 2026 03:35:22 +0000 (0:00:00.144) 0:00:44.192 **** 2026-02-18 03:35:26.080246 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b4fe298a-487d-5630-bf9a-8376c13eb8c3'}}) 2026-02-18 03:35:26.080258 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'}}) 2026-02-18 03:35:26.080268 | orchestrator | 2026-02-18 03:35:26.080340 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-18 03:35:26.080353 | orchestrator | Wednesday 18 February 2026 03:35:22 +0000 (0:00:00.186) 0:00:44.379 **** 2026-02-18 03:35:26.080367 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b4fe298a-487d-5630-bf9a-8376c13eb8c3'}})  2026-02-18 03:35:26.080381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'}})  2026-02-18 03:35:26.080394 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:26.080426 | orchestrator | 2026-02-18 03:35:26.080440 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-18 03:35:26.080452 | orchestrator | Wednesday 18 February 2026 03:35:22 +0000 (0:00:00.168) 0:00:44.547 **** 2026-02-18 03:35:26.080465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b4fe298a-487d-5630-bf9a-8376c13eb8c3'}})  2026-02-18 03:35:26.080477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'}})  2026-02-18 03:35:26.080490 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:26.080502 | orchestrator | 2026-02-18 03:35:26.080515 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-18 03:35:26.080527 | orchestrator | Wednesday 18 February 2026 03:35:23 +0000 (0:00:00.201) 0:00:44.749 **** 2026-02-18 03:35:26.080539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b4fe298a-487d-5630-bf9a-8376c13eb8c3'}})  2026-02-18 03:35:26.080552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'}})  2026-02-18 03:35:26.080564 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:26.080577 | orchestrator | 2026-02-18 03:35:26.080589 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-18 03:35:26.080602 | orchestrator | Wednesday 18 February 2026 03:35:23 +0000 
(0:00:00.150) 0:00:44.900 **** 2026-02-18 03:35:26.080614 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:35:26.080626 | orchestrator | 2026-02-18 03:35:26.080639 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-18 03:35:26.080652 | orchestrator | Wednesday 18 February 2026 03:35:23 +0000 (0:00:00.159) 0:00:45.059 **** 2026-02-18 03:35:26.080663 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:35:26.080673 | orchestrator | 2026-02-18 03:35:26.080684 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-18 03:35:26.080694 | orchestrator | Wednesday 18 February 2026 03:35:23 +0000 (0:00:00.402) 0:00:45.461 **** 2026-02-18 03:35:26.080705 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:26.080715 | orchestrator | 2026-02-18 03:35:26.080726 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-18 03:35:26.080737 | orchestrator | Wednesday 18 February 2026 03:35:23 +0000 (0:00:00.149) 0:00:45.610 **** 2026-02-18 03:35:26.080747 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:26.080758 | orchestrator | 2026-02-18 03:35:26.080769 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-18 03:35:26.080779 | orchestrator | Wednesday 18 February 2026 03:35:24 +0000 (0:00:00.141) 0:00:45.752 **** 2026-02-18 03:35:26.080790 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:26.080800 | orchestrator | 2026-02-18 03:35:26.080811 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-18 03:35:26.080821 | orchestrator | Wednesday 18 February 2026 03:35:24 +0000 (0:00:00.153) 0:00:45.906 **** 2026-02-18 03:35:26.080832 | orchestrator | ok: [testbed-node-5] => { 2026-02-18 03:35:26.080842 | orchestrator |  "ceph_osd_devices": { 2026-02-18 03:35:26.080853 | orchestrator |  "sdb": { 
2026-02-18 03:35:26.080880 | orchestrator |  "osd_lvm_uuid": "b4fe298a-487d-5630-bf9a-8376c13eb8c3" 2026-02-18 03:35:26.080892 | orchestrator |  }, 2026-02-18 03:35:26.080903 | orchestrator |  "sdc": { 2026-02-18 03:35:26.080913 | orchestrator |  "osd_lvm_uuid": "a3fa5e2b-5aa1-58af-bddd-1734a40d2e72" 2026-02-18 03:35:26.080924 | orchestrator |  } 2026-02-18 03:35:26.080935 | orchestrator |  } 2026-02-18 03:35:26.080945 | orchestrator | } 2026-02-18 03:35:26.080956 | orchestrator | 2026-02-18 03:35:26.080974 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-18 03:35:26.080985 | orchestrator | Wednesday 18 February 2026 03:35:24 +0000 (0:00:00.153) 0:00:46.059 **** 2026-02-18 03:35:26.080996 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:26.081015 | orchestrator | 2026-02-18 03:35:26.081026 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-18 03:35:26.081036 | orchestrator | Wednesday 18 February 2026 03:35:24 +0000 (0:00:00.149) 0:00:46.209 **** 2026-02-18 03:35:26.081047 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:26.081057 | orchestrator | 2026-02-18 03:35:26.081068 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-18 03:35:26.081078 | orchestrator | Wednesday 18 February 2026 03:35:24 +0000 (0:00:00.157) 0:00:46.367 **** 2026-02-18 03:35:26.081089 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:35:26.081100 | orchestrator | 2026-02-18 03:35:26.081110 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-18 03:35:26.081121 | orchestrator | Wednesday 18 February 2026 03:35:24 +0000 (0:00:00.140) 0:00:46.507 **** 2026-02-18 03:35:26.081131 | orchestrator | changed: [testbed-node-5] => { 2026-02-18 03:35:26.081142 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-18 03:35:26.081153 | orchestrator | 
 "ceph_osd_devices": { 2026-02-18 03:35:26.081163 | orchestrator |  "sdb": { 2026-02-18 03:35:26.081174 | orchestrator |  "osd_lvm_uuid": "b4fe298a-487d-5630-bf9a-8376c13eb8c3" 2026-02-18 03:35:26.081185 | orchestrator |  }, 2026-02-18 03:35:26.081196 | orchestrator |  "sdc": { 2026-02-18 03:35:26.081206 | orchestrator |  "osd_lvm_uuid": "a3fa5e2b-5aa1-58af-bddd-1734a40d2e72" 2026-02-18 03:35:26.081217 | orchestrator |  } 2026-02-18 03:35:26.081227 | orchestrator |  }, 2026-02-18 03:35:26.081238 | orchestrator |  "lvm_volumes": [ 2026-02-18 03:35:26.081248 | orchestrator |  { 2026-02-18 03:35:26.081259 | orchestrator |  "data": "osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3", 2026-02-18 03:35:26.081294 | orchestrator |  "data_vg": "ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3" 2026-02-18 03:35:26.081306 | orchestrator |  }, 2026-02-18 03:35:26.081317 | orchestrator |  { 2026-02-18 03:35:26.081328 | orchestrator |  "data": "osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72", 2026-02-18 03:35:26.081338 | orchestrator |  "data_vg": "ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72" 2026-02-18 03:35:26.081349 | orchestrator |  } 2026-02-18 03:35:26.081360 | orchestrator |  ] 2026-02-18 03:35:26.081370 | orchestrator |  } 2026-02-18 03:35:26.081381 | orchestrator | } 2026-02-18 03:35:26.081392 | orchestrator | 2026-02-18 03:35:26.081402 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-18 03:35:26.081413 | orchestrator | Wednesday 18 February 2026 03:35:25 +0000 (0:00:00.218) 0:00:46.726 **** 2026-02-18 03:35:26.081424 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-18 03:35:26.081434 | orchestrator | 2026-02-18 03:35:26.081445 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:35:26.081455 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-18 03:35:26.081467 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-18 03:35:26.081478 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-18 03:35:26.081488 | orchestrator | 2026-02-18 03:35:26.081499 | orchestrator | 2026-02-18 03:35:26.081510 | orchestrator | 2026-02-18 03:35:26.081520 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:35:26.081531 | orchestrator | Wednesday 18 February 2026 03:35:26 +0000 (0:00:01.040) 0:00:47.766 **** 2026-02-18 03:35:26.081541 | orchestrator | =============================================================================== 2026-02-18 03:35:26.081552 | orchestrator | Write configuration file ------------------------------------------------ 4.28s 2026-02-18 03:35:26.081580 | orchestrator | Add known partitions to the list of available block devices ------------- 2.10s 2026-02-18 03:35:26.081591 | orchestrator | Add known links to the list of available block devices ------------------ 1.38s 2026-02-18 03:35:26.081601 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s 2026-02-18 03:35:26.081612 | orchestrator | Print configuration data ------------------------------------------------ 1.17s 2026-02-18 03:35:26.081622 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s 2026-02-18 03:35:26.081633 | orchestrator | Add known links to the list of available block devices ------------------ 0.97s 2026-02-18 03:35:26.081643 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s 2026-02-18 03:35:26.081654 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s 2026-02-18 03:35:26.081664 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.78s 2026-02-18 
03:35:26.081675 | orchestrator | Get initial list of available block devices ----------------------------- 0.78s 2026-02-18 03:35:26.081686 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-02-18 03:35:26.081696 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-02-18 03:35:26.081714 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-02-18 03:35:26.545338 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-02-18 03:35:26.545462 | orchestrator | Set OSD devices config data --------------------------------------------- 0.72s 2026-02-18 03:35:26.545484 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-02-18 03:35:26.545525 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.71s 2026-02-18 03:35:26.545542 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-02-18 03:35:26.545556 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-02-18 03:35:49.248979 | orchestrator | 2026-02-18 03:35:49 | INFO  | Task 59f66cb6-54bd-433b-9931-73c0f3e33ed4 (sync inventory) is running in background. Output coming soon. 
2026-02-18 03:36:20.231220 | orchestrator | 2026-02-18 03:35:50 | INFO  | Starting group_vars file reorganization 2026-02-18 03:36:20.231343 | orchestrator | 2026-02-18 03:35:50 | INFO  | Moved 0 file(s) to their respective directories 2026-02-18 03:36:20.231357 | orchestrator | 2026-02-18 03:35:50 | INFO  | Group_vars file reorganization completed 2026-02-18 03:36:20.231367 | orchestrator | 2026-02-18 03:35:53 | INFO  | Starting variable preparation from inventory 2026-02-18 03:36:20.231376 | orchestrator | 2026-02-18 03:35:57 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-18 03:36:20.231385 | orchestrator | 2026-02-18 03:35:57 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-18 03:36:20.231391 | orchestrator | 2026-02-18 03:35:57 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-18 03:36:20.231396 | orchestrator | 2026-02-18 03:35:57 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-18 03:36:20.231402 | orchestrator | 2026-02-18 03:35:57 | INFO  | Variable preparation completed 2026-02-18 03:36:20.231408 | orchestrator | 2026-02-18 03:35:58 | INFO  | Starting inventory overwrite handling 2026-02-18 03:36:20.231413 | orchestrator | 2026-02-18 03:35:58 | INFO  | Handling group overwrites in 99-overwrite 2026-02-18 03:36:20.231418 | orchestrator | 2026-02-18 03:35:58 | INFO  | Removing group frr:children from 60-generic 2026-02-18 03:36:20.231423 | orchestrator | 2026-02-18 03:35:58 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-18 03:36:20.231428 | orchestrator | 2026-02-18 03:35:58 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-18 03:36:20.231453 | orchestrator | 2026-02-18 03:35:58 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-18 03:36:20.231458 | orchestrator | 2026-02-18 03:35:58 | INFO  | Handling group overwrites in 20-roles 2026-02-18 03:36:20.231463 | orchestrator | 2026-02-18 03:35:58 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-18 03:36:20.231469 | orchestrator | 2026-02-18 03:35:58 | INFO  | Removed 5 group(s) in total 2026-02-18 03:36:20.231473 | orchestrator | 2026-02-18 03:35:58 | INFO  | Inventory overwrite handling completed 2026-02-18 03:36:20.231479 | orchestrator | 2026-02-18 03:36:00 | INFO  | Starting merge of inventory files 2026-02-18 03:36:20.231484 | orchestrator | 2026-02-18 03:36:00 | INFO  | Inventory files merged successfully 2026-02-18 03:36:20.231489 | orchestrator | 2026-02-18 03:36:05 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-18 03:36:20.231494 | orchestrator | 2026-02-18 03:36:18 | INFO  | Successfully wrote ClusterShell configuration 2026-02-18 03:36:20.231499 | orchestrator | [master 6e53c19] 2026-02-18-03-36 2026-02-18 03:36:20.231505 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-02-18 03:36:22.827226 | orchestrator | 2026-02-18 03:36:22 | INFO  | Task 351443f7-6ee6-4c40-a814-fc0ff25fe27e (ceph-create-lvm-devices) was prepared for execution. 2026-02-18 03:36:22.827347 | orchestrator | 2026-02-18 03:36:22 | INFO  | It takes a moment until task 351443f7-6ee6-4c40-a814-fc0ff25fe27e (ceph-create-lvm-devices) has been started and output is visible here. 
2026-02-18 03:36:35.624112 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-18 03:36:35.624243 | orchestrator | 2.16.14 2026-02-18 03:36:35.624263 | orchestrator | 2026-02-18 03:36:35.624276 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-18 03:36:35.624290 | orchestrator | 2026-02-18 03:36:35.624302 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-18 03:36:35.624314 | orchestrator | Wednesday 18 February 2026 03:36:27 +0000 (0:00:00.318) 0:00:00.318 **** 2026-02-18 03:36:35.624326 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-18 03:36:35.624337 | orchestrator | 2026-02-18 03:36:35.624349 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-18 03:36:35.624362 | orchestrator | Wednesday 18 February 2026 03:36:27 +0000 (0:00:00.275) 0:00:00.593 **** 2026-02-18 03:36:35.624375 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:36:35.624389 | orchestrator | 2026-02-18 03:36:35.624402 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.624415 | orchestrator | Wednesday 18 February 2026 03:36:28 +0000 (0:00:00.247) 0:00:00.841 **** 2026-02-18 03:36:35.624429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-18 03:36:35.624462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-18 03:36:35.624472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-18 03:36:35.624480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-18 03:36:35.624488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-18 
03:36:35.624496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-18 03:36:35.624504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-18 03:36:35.624514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-18 03:36:35.624527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-18 03:36:35.624540 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-18 03:36:35.624577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-18 03:36:35.624590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-18 03:36:35.624602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-18 03:36:35.624614 | orchestrator | 2026-02-18 03:36:35.624626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.624639 | orchestrator | Wednesday 18 February 2026 03:36:28 +0000 (0:00:00.567) 0:00:01.408 **** 2026-02-18 03:36:35.624652 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.624665 | orchestrator | 2026-02-18 03:36:35.624679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.624692 | orchestrator | Wednesday 18 February 2026 03:36:28 +0000 (0:00:00.217) 0:00:01.625 **** 2026-02-18 03:36:35.624706 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.624719 | orchestrator | 2026-02-18 03:36:35.624733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.624748 | orchestrator | Wednesday 18 February 2026 03:36:29 +0000 (0:00:00.222) 0:00:01.848 **** 2026-02-18 
03:36:35.624762 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.624775 | orchestrator | 2026-02-18 03:36:35.624788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.624797 | orchestrator | Wednesday 18 February 2026 03:36:29 +0000 (0:00:00.195) 0:00:02.044 **** 2026-02-18 03:36:35.624804 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.624812 | orchestrator | 2026-02-18 03:36:35.624819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.624827 | orchestrator | Wednesday 18 February 2026 03:36:29 +0000 (0:00:00.224) 0:00:02.268 **** 2026-02-18 03:36:35.624835 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.624842 | orchestrator | 2026-02-18 03:36:35.624850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.624858 | orchestrator | Wednesday 18 February 2026 03:36:29 +0000 (0:00:00.219) 0:00:02.488 **** 2026-02-18 03:36:35.624866 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.624873 | orchestrator | 2026-02-18 03:36:35.624881 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.624889 | orchestrator | Wednesday 18 February 2026 03:36:29 +0000 (0:00:00.226) 0:00:02.714 **** 2026-02-18 03:36:35.624896 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.625031 | orchestrator | 2026-02-18 03:36:35.625040 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.625048 | orchestrator | Wednesday 18 February 2026 03:36:30 +0000 (0:00:00.223) 0:00:02.938 **** 2026-02-18 03:36:35.625055 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.625063 | orchestrator | 2026-02-18 03:36:35.625071 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-02-18 03:36:35.625078 | orchestrator | Wednesday 18 February 2026 03:36:30 +0000 (0:00:00.237) 0:00:03.175 **** 2026-02-18 03:36:35.625086 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f) 2026-02-18 03:36:35.625095 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f) 2026-02-18 03:36:35.625103 | orchestrator | 2026-02-18 03:36:35.625110 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.625138 | orchestrator | Wednesday 18 February 2026 03:36:30 +0000 (0:00:00.437) 0:00:03.613 **** 2026-02-18 03:36:35.625147 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f) 2026-02-18 03:36:35.625155 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f) 2026-02-18 03:36:35.625163 | orchestrator | 2026-02-18 03:36:35.625173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.625202 | orchestrator | Wednesday 18 February 2026 03:36:31 +0000 (0:00:00.681) 0:00:04.295 **** 2026-02-18 03:36:35.625215 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6) 2026-02-18 03:36:35.625229 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6) 2026-02-18 03:36:35.625241 | orchestrator | 2026-02-18 03:36:35.625253 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.625264 | orchestrator | Wednesday 18 February 2026 03:36:32 +0000 (0:00:00.716) 0:00:05.011 **** 2026-02-18 03:36:35.625275 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911) 2026-02-18 03:36:35.625296 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911) 2026-02-18 03:36:35.625308 | orchestrator | 2026-02-18 03:36:35.625324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-18 03:36:35.625338 | orchestrator | Wednesday 18 February 2026 03:36:33 +0000 (0:00:00.990) 0:00:06.002 **** 2026-02-18 03:36:35.625352 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-18 03:36:35.625364 | orchestrator | 2026-02-18 03:36:35.625378 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:35.625390 | orchestrator | Wednesday 18 February 2026 03:36:33 +0000 (0:00:00.365) 0:00:06.368 **** 2026-02-18 03:36:35.625403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-18 03:36:35.625417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-18 03:36:35.625428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-18 03:36:35.625440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-18 03:36:35.625451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-18 03:36:35.625465 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-18 03:36:35.625477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-18 03:36:35.625491 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-18 03:36:35.625504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-18 03:36:35.625517 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-18 03:36:35.625531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-18 03:36:35.625540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-18 03:36:35.625547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-18 03:36:35.625555 | orchestrator | 2026-02-18 03:36:35.625563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:35.625570 | orchestrator | Wednesday 18 February 2026 03:36:34 +0000 (0:00:00.461) 0:00:06.829 **** 2026-02-18 03:36:35.625578 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.625586 | orchestrator | 2026-02-18 03:36:35.625593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:35.625601 | orchestrator | Wednesday 18 February 2026 03:36:34 +0000 (0:00:00.220) 0:00:07.050 **** 2026-02-18 03:36:35.625609 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.625616 | orchestrator | 2026-02-18 03:36:35.625624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:35.625632 | orchestrator | Wednesday 18 February 2026 03:36:34 +0000 (0:00:00.226) 0:00:07.276 **** 2026-02-18 03:36:35.625639 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.625657 | orchestrator | 2026-02-18 03:36:35.625665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:35.625672 | orchestrator | Wednesday 18 February 2026 03:36:34 +0000 (0:00:00.245) 0:00:07.522 **** 2026-02-18 03:36:35.625680 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.625688 | orchestrator | 2026-02-18 03:36:35.625695 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-18 03:36:35.625703 | orchestrator | Wednesday 18 February 2026 03:36:34 +0000 (0:00:00.206) 0:00:07.729 **** 2026-02-18 03:36:35.625711 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.625718 | orchestrator | 2026-02-18 03:36:35.625726 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:35.625734 | orchestrator | Wednesday 18 February 2026 03:36:35 +0000 (0:00:00.238) 0:00:07.967 **** 2026-02-18 03:36:35.625741 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.625749 | orchestrator | 2026-02-18 03:36:35.625757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:35.625764 | orchestrator | Wednesday 18 February 2026 03:36:35 +0000 (0:00:00.207) 0:00:08.175 **** 2026-02-18 03:36:35.625772 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:35.625780 | orchestrator | 2026-02-18 03:36:35.625798 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:44.117841 | orchestrator | Wednesday 18 February 2026 03:36:35 +0000 (0:00:00.211) 0:00:08.386 **** 2026-02-18 03:36:44.117961 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:44.117971 | orchestrator | 2026-02-18 03:36:44.117979 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:44.117986 | orchestrator | Wednesday 18 February 2026 03:36:36 +0000 (0:00:00.702) 0:00:09.089 **** 2026-02-18 03:36:44.117992 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-18 03:36:44.118000 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-18 03:36:44.118006 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-18 03:36:44.118042 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-18 03:36:44.118050 | orchestrator | 2026-02-18 
03:36:44.118057 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:44.118063 | orchestrator | Wednesday 18 February 2026 03:36:37 +0000 (0:00:00.723) 0:00:09.812 **** 2026-02-18 03:36:44.118069 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:44.118075 | orchestrator | 2026-02-18 03:36:44.118082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:44.118088 | orchestrator | Wednesday 18 February 2026 03:36:37 +0000 (0:00:00.230) 0:00:10.043 **** 2026-02-18 03:36:44.118094 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:44.118101 | orchestrator | 2026-02-18 03:36:44.118119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:44.118126 | orchestrator | Wednesday 18 February 2026 03:36:37 +0000 (0:00:00.230) 0:00:10.274 **** 2026-02-18 03:36:44.118132 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:44.118169 | orchestrator | 2026-02-18 03:36:44.118180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:36:44.118191 | orchestrator | Wednesday 18 February 2026 03:36:37 +0000 (0:00:00.229) 0:00:10.503 **** 2026-02-18 03:36:44.118202 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:44.118212 | orchestrator | 2026-02-18 03:36:44.118224 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-18 03:36:44.118235 | orchestrator | Wednesday 18 February 2026 03:36:37 +0000 (0:00:00.226) 0:00:10.730 **** 2026-02-18 03:36:44.118242 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:36:44.118248 | orchestrator | 2026-02-18 03:36:44.118254 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-18 03:36:44.118260 | orchestrator | Wednesday 18 February 2026 03:36:38 +0000 (0:00:00.147) 
0:00:10.877 ****
2026-02-18 03:36:44.118267 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'}})
2026-02-18 03:36:44.118292 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c707e11d-d3db-5907-b25a-51e31fa350e2'}})
2026-02-18 03:36:44.118298 | orchestrator |
2026-02-18 03:36:44.118305 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-18 03:36:44.118311 | orchestrator | Wednesday 18 February 2026 03:36:38 +0000 (0:00:00.204) 0:00:11.082 ****
2026-02-18 03:36:44.118318 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:44.118326 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:44.118332 | orchestrator |
2026-02-18 03:36:44.118338 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-18 03:36:44.118344 | orchestrator | Wednesday 18 February 2026 03:36:40 +0000 (0:00:02.007) 0:00:13.090 ****
2026-02-18 03:36:44.118351 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:44.118358 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:44.118364 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118370 | orchestrator |
2026-02-18 03:36:44.118376 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-18 03:36:44.118383 | orchestrator | Wednesday 18 February 2026 03:36:40 +0000 (0:00:00.181) 0:00:13.271 ****
2026-02-18 03:36:44.118389 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:44.118395 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:44.118401 | orchestrator |
2026-02-18 03:36:44.118407 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-18 03:36:44.118413 | orchestrator | Wednesday 18 February 2026 03:36:42 +0000 (0:00:01.501) 0:00:14.772 ****
2026-02-18 03:36:44.118423 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:44.118433 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:44.118442 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118451 | orchestrator |
2026-02-18 03:36:44.118460 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-18 03:36:44.118471 | orchestrator | Wednesday 18 February 2026 03:36:42 +0000 (0:00:00.163) 0:00:14.935 ****
2026-02-18 03:36:44.118498 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118505 | orchestrator |
2026-02-18 03:36:44.118511 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-18 03:36:44.118517 | orchestrator | Wednesday 18 February 2026 03:36:42 +0000 (0:00:00.366) 0:00:15.302 ****
2026-02-18 03:36:44.118524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:44.118530 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:44.118536 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118542 | orchestrator |
2026-02-18 03:36:44.118548 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-18 03:36:44.118554 | orchestrator | Wednesday 18 February 2026 03:36:42 +0000 (0:00:00.162) 0:00:15.465 ****
2026-02-18 03:36:44.118567 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118573 | orchestrator |
2026-02-18 03:36:44.118579 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-18 03:36:44.118585 | orchestrator | Wednesday 18 February 2026 03:36:42 +0000 (0:00:00.143) 0:00:15.608 ****
2026-02-18 03:36:44.118596 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:44.118603 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:44.118609 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118615 | orchestrator |
2026-02-18 03:36:44.118621 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-18 03:36:44.118627 | orchestrator | Wednesday 18 February 2026 03:36:43 +0000 (0:00:00.164) 0:00:15.773 ****
2026-02-18 03:36:44.118633 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118639 | orchestrator |
2026-02-18 03:36:44.118645 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-18 03:36:44.118651 | orchestrator | Wednesday 18 February 2026 03:36:43 +0000 (0:00:00.145) 0:00:15.918 ****
2026-02-18 03:36:44.118657 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:44.118664 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:44.118670 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118676 | orchestrator |
2026-02-18 03:36:44.118682 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-18 03:36:44.118688 | orchestrator | Wednesday 18 February 2026 03:36:43 +0000 (0:00:00.171) 0:00:16.090 ****
2026-02-18 03:36:44.118694 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:36:44.118700 | orchestrator |
2026-02-18 03:36:44.118706 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-18 03:36:44.118713 | orchestrator | Wednesday 18 February 2026 03:36:43 +0000 (0:00:00.144) 0:00:16.234 ****
2026-02-18 03:36:44.118719 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:44.118725 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:44.118731 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118737 | orchestrator |
2026-02-18 03:36:44.118743 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-18 03:36:44.118761 | orchestrator | Wednesday 18 February 2026 03:36:43 +0000 (0:00:00.166) 0:00:16.401 ****
2026-02-18 03:36:44.118774 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:44.118780 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:44.118786 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118792 | orchestrator |
2026-02-18 03:36:44.118798 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-18 03:36:44.118805 | orchestrator | Wednesday 18 February 2026 03:36:43 +0000 (0:00:00.164) 0:00:16.566 ****
2026-02-18 03:36:44.118811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:44.118817 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:44.118828 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118834 | orchestrator |
2026-02-18 03:36:44.118840 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-18 03:36:44.118846 | orchestrator | Wednesday 18 February 2026 03:36:43 +0000 (0:00:00.152) 0:00:16.729 ****
2026-02-18 03:36:44.118852 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:44.118898 | orchestrator |
2026-02-18 03:36:44.118905 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-18 03:36:44.118917 | orchestrator | Wednesday 18 February 2026 03:36:44 +0000 (0:00:00.144) 0:00:16.881 ****
2026-02-18 03:36:51.034187 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.034297 | orchestrator |
2026-02-18 03:36:51.034313 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-18 03:36:51.034324 | orchestrator | Wednesday 18 February 2026 03:36:44 +0000 (0:00:00.144) 0:00:17.026 ****
2026-02-18 03:36:51.034332 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.034341 | orchestrator |
2026-02-18 03:36:51.034350 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-18 03:36:51.034358 | orchestrator | Wednesday 18 February 2026 03:36:44 +0000 (0:00:00.379) 0:00:17.406 ****
2026-02-18 03:36:51.034366 | orchestrator | ok: [testbed-node-3] => {
2026-02-18 03:36:51.034375 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-18 03:36:51.034383 | orchestrator | }
2026-02-18 03:36:51.034391 | orchestrator |
2026-02-18 03:36:51.034399 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-18 03:36:51.034407 | orchestrator | Wednesday 18 February 2026 03:36:44 +0000 (0:00:00.149) 0:00:17.556 ****
2026-02-18 03:36:51.034415 | orchestrator | ok: [testbed-node-3] => {
2026-02-18 03:36:51.034423 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-18 03:36:51.034431 | orchestrator | }
2026-02-18 03:36:51.034439 | orchestrator |
2026-02-18 03:36:51.034446 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-18 03:36:51.034468 | orchestrator | Wednesday 18 February 2026 03:36:44 +0000 (0:00:00.145) 0:00:17.702 ****
2026-02-18 03:36:51.034476 | orchestrator | ok: [testbed-node-3] => {
2026-02-18 03:36:51.034484 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-18 03:36:51.034492 | orchestrator | }
2026-02-18 03:36:51.034500 | orchestrator |
2026-02-18 03:36:51.034508 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-18 03:36:51.034516 | orchestrator | Wednesday 18 February 2026 03:36:45 +0000 (0:00:00.157) 0:00:17.859 ****
2026-02-18 03:36:51.034523 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:36:51.034531 | orchestrator |
2026-02-18 03:36:51.034539 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-18 03:36:51.034547 | orchestrator | Wednesday 18 February 2026 03:36:45 +0000 (0:00:00.701) 0:00:18.561 ****
2026-02-18 03:36:51.034555 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:36:51.034562 | orchestrator |
2026-02-18 03:36:51.034570 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-18 03:36:51.034578 | orchestrator | Wednesday 18 February 2026 03:36:46 +0000 (0:00:00.533) 0:00:19.094 ****
2026-02-18 03:36:51.034586 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:36:51.034617 | orchestrator |
2026-02-18 03:36:51.034626 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-18 03:36:51.034634 | orchestrator | Wednesday 18 February 2026 03:36:46 +0000 (0:00:00.537) 0:00:19.632 ****
2026-02-18 03:36:51.034642 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:36:51.034650 | orchestrator |
2026-02-18 03:36:51.034657 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-18 03:36:51.034665 | orchestrator | Wednesday 18 February 2026 03:36:47 +0000 (0:00:00.173) 0:00:19.805 ****
2026-02-18 03:36:51.034673 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.034681 | orchestrator |
2026-02-18 03:36:51.034689 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-18 03:36:51.034714 | orchestrator | Wednesday 18 February 2026 03:36:47 +0000 (0:00:00.143) 0:00:19.948 ****
2026-02-18 03:36:51.034722 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.034730 | orchestrator |
2026-02-18 03:36:51.034738 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-18 03:36:51.034746 | orchestrator | Wednesday 18 February 2026 03:36:47 +0000 (0:00:00.141) 0:00:20.090 ****
2026-02-18 03:36:51.034754 | orchestrator | ok: [testbed-node-3] => {
2026-02-18 03:36:51.034762 | orchestrator |     "vgs_report": {
2026-02-18 03:36:51.034769 | orchestrator |         "vg": []
2026-02-18 03:36:51.034777 | orchestrator |     }
2026-02-18 03:36:51.034785 | orchestrator | }
2026-02-18 03:36:51.034793 | orchestrator |
2026-02-18 03:36:51.034801 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-18 03:36:51.034809 | orchestrator | Wednesday 18 February 2026 03:36:47 +0000 (0:00:00.186) 0:00:20.276 ****
2026-02-18 03:36:51.034817 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.034882 | orchestrator |
2026-02-18 03:36:51.034891 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-18 03:36:51.034899 | orchestrator | Wednesday 18 February 2026 03:36:47 +0000 (0:00:00.142) 0:00:20.418 ****
2026-02-18 03:36:51.034906 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.034914 | orchestrator |
2026-02-18 03:36:51.034922 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-18 03:36:51.034930 | orchestrator | Wednesday 18 February 2026 03:36:48 +0000 (0:00:00.371) 0:00:20.790 ****
2026-02-18 03:36:51.034937 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.034945 | orchestrator |
2026-02-18 03:36:51.034953 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-18 03:36:51.034960 | orchestrator | Wednesday 18 February 2026 03:36:48 +0000 (0:00:00.143) 0:00:20.934 ****
2026-02-18 03:36:51.034968 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.034976 | orchestrator |
2026-02-18 03:36:51.034983 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-18 03:36:51.034991 | orchestrator | Wednesday 18 February 2026 03:36:48 +0000 (0:00:00.151) 0:00:21.086 ****
2026-02-18 03:36:51.034999 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035006 | orchestrator |
2026-02-18 03:36:51.035014 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-18 03:36:51.035022 | orchestrator | Wednesday 18 February 2026 03:36:48 +0000 (0:00:00.143) 0:00:21.229 ****
2026-02-18 03:36:51.035029 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035037 | orchestrator |
2026-02-18 03:36:51.035045 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-18 03:36:51.035052 | orchestrator | Wednesday 18 February 2026 03:36:48 +0000 (0:00:00.158) 0:00:21.387 ****
2026-02-18 03:36:51.035060 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035068 | orchestrator |
2026-02-18 03:36:51.035075 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-18 03:36:51.035083 | orchestrator | Wednesday 18 February 2026 03:36:48 +0000 (0:00:00.146) 0:00:21.533 ****
2026-02-18 03:36:51.035106 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035114 | orchestrator |
2026-02-18 03:36:51.035122 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-18 03:36:51.035130 | orchestrator | Wednesday 18 February 2026 03:36:48 +0000 (0:00:00.159) 0:00:21.693 ****
2026-02-18 03:36:51.035138 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035146 | orchestrator |
2026-02-18 03:36:51.035153 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-18 03:36:51.035161 | orchestrator | Wednesday 18 February 2026 03:36:49 +0000 (0:00:00.135) 0:00:21.829 ****
2026-02-18 03:36:51.035169 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035176 | orchestrator |
2026-02-18 03:36:51.035184 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-18 03:36:51.035192 | orchestrator | Wednesday 18 February 2026 03:36:49 +0000 (0:00:00.157) 0:00:21.987 ****
2026-02-18 03:36:51.035207 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035215 | orchestrator |
2026-02-18 03:36:51.035223 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-18 03:36:51.035231 | orchestrator | Wednesday 18 February 2026 03:36:49 +0000 (0:00:00.139) 0:00:22.127 ****
2026-02-18 03:36:51.035238 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035246 | orchestrator |
2026-02-18 03:36:51.035259 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-18 03:36:51.035267 | orchestrator | Wednesday 18 February 2026 03:36:49 +0000 (0:00:00.144) 0:00:22.271 ****
2026-02-18 03:36:51.035275 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035283 | orchestrator |
2026-02-18 03:36:51.035291 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-18 03:36:51.035298 | orchestrator | Wednesday 18 February 2026 03:36:49 +0000 (0:00:00.139) 0:00:22.410 ****
2026-02-18 03:36:51.035306 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035314 | orchestrator |
2026-02-18 03:36:51.035321 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-18 03:36:51.035329 | orchestrator | Wednesday 18 February 2026 03:36:50 +0000 (0:00:00.394) 0:00:22.804 ****
2026-02-18 03:36:51.035338 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:51.035348 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:51.035356 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035363 | orchestrator |
2026-02-18 03:36:51.035371 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-18 03:36:51.035379 | orchestrator | Wednesday 18 February 2026 03:36:50 +0000 (0:00:00.178) 0:00:22.983 ****
2026-02-18 03:36:51.035387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:51.035395 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:51.035403 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035410 | orchestrator |
2026-02-18 03:36:51.035418 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-18 03:36:51.035426 | orchestrator | Wednesday 18 February 2026 03:36:50 +0000 (0:00:00.169) 0:00:23.152 ****
2026-02-18 03:36:51.035432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:51.035439 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:51.035446 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035452 | orchestrator |
2026-02-18 03:36:51.035459 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-18 03:36:51.035466 | orchestrator | Wednesday 18 February 2026 03:36:50 +0000 (0:00:00.166) 0:00:23.319 ****
2026-02-18 03:36:51.035472 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:51.035479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:51.035486 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035492 | orchestrator |
2026-02-18 03:36:51.035499 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-18 03:36:51.035505 | orchestrator | Wednesday 18 February 2026 03:36:50 +0000 (0:00:00.156) 0:00:23.475 ****
2026-02-18 03:36:51.035517 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:51.035524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:51.035530 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:51.035537 | orchestrator |
2026-02-18 03:36:51.035544 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-18 03:36:51.035550 | orchestrator | Wednesday 18 February 2026 03:36:50 +0000 (0:00:00.166) 0:00:23.642 ****
2026-02-18 03:36:51.035562 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:56.734941 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:56.735017 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:56.735023 | orchestrator |
2026-02-18 03:36:56.735029 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-18 03:36:56.735035 | orchestrator | Wednesday 18 February 2026 03:36:51 +0000 (0:00:00.156) 0:00:23.798 ****
2026-02-18 03:36:56.735039 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:56.735043 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:56.735047 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:56.735051 | orchestrator |
2026-02-18 03:36:56.735065 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-18 03:36:56.735070 | orchestrator | Wednesday 18 February 2026 03:36:51 +0000 (0:00:00.156) 0:00:23.955 ****
2026-02-18 03:36:56.735073 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:56.735077 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:56.735081 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:56.735085 | orchestrator |
2026-02-18 03:36:56.735089 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-18 03:36:56.735092 | orchestrator | Wednesday 18 February 2026 03:36:51 +0000 (0:00:00.528) 0:00:24.668 ****
2026-02-18 03:36:56.735096 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:36:56.735101 | orchestrator |
2026-02-18 03:36:56.735105 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-18 03:36:56.735108 | orchestrator | Wednesday 18 February 2026 03:36:51 +0000 (0:00:00.183) 0:00:24.139 ****
2026-02-18 03:36:56.735112 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:36:56.735116 | orchestrator |
2026-02-18 03:36:56.735119 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-18 03:36:56.735123 | orchestrator | Wednesday 18 February 2026 03:36:52 +0000 (0:00:00.543) 0:00:25.211 ****
2026-02-18 03:36:56.735127 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:36:56.735130 | orchestrator |
2026-02-18 03:36:56.735134 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-18 03:36:56.735139 | orchestrator | Wednesday 18 February 2026 03:36:52 +0000 (0:00:00.158) 0:00:25.370 ****
2026-02-18 03:36:56.735143 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'vg_name': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:56.735148 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'vg_name': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:56.735164 | orchestrator |
2026-02-18 03:36:56.735168 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-18 03:36:56.735172 | orchestrator | Wednesday 18 February 2026 03:36:52 +0000 (0:00:00.179) 0:00:25.550 ****
2026-02-18 03:36:56.735176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:56.735179 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:56.735183 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:56.735187 | orchestrator |
2026-02-18 03:36:56.735191 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-18 03:36:56.735195 | orchestrator | Wednesday 18 February 2026 03:36:53 +0000 (0:00:00.400) 0:00:25.950 ****
2026-02-18 03:36:56.735198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:56.735202 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:56.735206 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:56.735210 | orchestrator |
2026-02-18 03:36:56.735213 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-18 03:36:56.735217 | orchestrator | Wednesday 18 February 2026 03:36:53 +0000 (0:00:00.179) 0:00:26.129 ****
2026-02-18 03:36:56.735221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 03:36:56.735225 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 03:36:56.735228 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:36:56.735232 | orchestrator |
2026-02-18 03:36:56.735236 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-18 03:36:56.735240 | orchestrator | Wednesday 18 February 2026 03:36:53 +0000 (0:00:00.161) 0:00:26.291 ****
2026-02-18 03:36:56.735255 | orchestrator | ok: [testbed-node-3] => {
2026-02-18 03:36:56.735259 | orchestrator |     "lvm_report": {
2026-02-18 03:36:56.735263 | orchestrator |         "lv": [
2026-02-18 03:36:56.735267 | orchestrator |             {
2026-02-18 03:36:56.735271 | orchestrator |                 "lv_name": "osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31",
2026-02-18 03:36:56.735275 | orchestrator |                 "vg_name": "ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31"
2026-02-18 03:36:56.735279 | orchestrator |             },
2026-02-18 03:36:56.735283 | orchestrator |             {
2026-02-18 03:36:56.735287 | orchestrator |                 "lv_name": "osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2",
2026-02-18 03:36:56.735290 | orchestrator |                 "vg_name": "ceph-c707e11d-d3db-5907-b25a-51e31fa350e2"
2026-02-18 03:36:56.735294 | orchestrator |             }
2026-02-18 03:36:56.735298 | orchestrator |         ],
2026-02-18 03:36:56.735302 | orchestrator |         "pv": [
2026-02-18 03:36:56.735305 | orchestrator |             {
2026-02-18 03:36:56.735309 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-18 03:36:56.735313 | orchestrator |                 "vg_name": "ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31"
2026-02-18 03:36:56.735317 | orchestrator |             },
2026-02-18 03:36:56.735321 | orchestrator |             {
2026-02-18 03:36:56.735328 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-18 03:36:56.735332 | orchestrator |                 "vg_name": "ceph-c707e11d-d3db-5907-b25a-51e31fa350e2"
2026-02-18 03:36:56.735335 | orchestrator |             }
2026-02-18 03:36:56.735339 | orchestrator |         ]
2026-02-18 03:36:56.735343 | orchestrator |     }
2026-02-18 03:36:56.735347 | orchestrator | }
2026-02-18 03:36:56.735355 | orchestrator |
2026-02-18 03:36:56.735359 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-18 03:36:56.735362 | orchestrator |
2026-02-18 03:36:56.735366 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-18 03:36:56.735370 | orchestrator | Wednesday 18 February 2026 03:36:53 +0000 (0:00:00.331) 0:00:26.622 ****
2026-02-18 03:36:56.735374 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-18 03:36:56.735378 | orchestrator |
2026-02-18 03:36:56.735382 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-18 03:36:56.735386 | orchestrator | Wednesday 18 February 2026 03:36:54 +0000 (0:00:00.300) 0:00:26.922 ****
2026-02-18 03:36:56.735390 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:36:56.735393 | orchestrator |
2026-02-18 03:36:56.735399 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:36:56.735406 | orchestrator | Wednesday 18 February 2026 03:36:54 +0000 (0:00:00.270) 0:00:27.193 ****
2026-02-18 03:36:56.735412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-18 03:36:56.735423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-18 03:36:56.735431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-18 03:36:56.735437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-18 03:36:56.735443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-18 03:36:56.735450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-18 03:36:56.735457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-18 03:36:56.735463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-18 03:36:56.735470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-18 03:36:56.735476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-18 03:36:56.735481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-18 03:36:56.735487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-18 03:36:56.735492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-18 03:36:56.735499 | orchestrator |
2026-02-18 03:36:56.735506 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:36:56.735513 | orchestrator | Wednesday 18 February 2026 03:36:54 +0000 (0:00:00.473) 0:00:27.666 ****
2026-02-18 03:36:56.735520 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:36:56.735526 | orchestrator |
2026-02-18 03:36:56.735534 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:36:56.735542 | orchestrator | Wednesday 18 February 2026 03:36:55 +0000 (0:00:00.205) 0:00:27.872 ****
2026-02-18 03:36:56.735550 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:36:56.735556 | orchestrator |
2026-02-18 03:36:56.735563 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:36:56.735570 | orchestrator | Wednesday 18 February 2026 03:36:55 +0000 (0:00:00.704) 0:00:28.576 ****
2026-02-18 03:36:56.735578 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:36:56.735586 | orchestrator |
2026-02-18 03:36:56.735592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:36:56.735600 | orchestrator | Wednesday 18 February 2026 03:36:56 +0000 (0:00:00.231) 0:00:28.808 ****
2026-02-18 03:36:56.735607 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:36:56.735614 | orchestrator |
2026-02-18 03:36:56.735620 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:36:56.735626 | orchestrator | Wednesday 18 February 2026 03:36:56 +0000 (0:00:00.222) 0:00:29.030 ****
2026-02-18 03:36:56.735639 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:36:56.735645 | orchestrator |
2026-02-18 03:36:56.735651 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:36:56.735658 | orchestrator | Wednesday 18 February 2026 03:36:56 +0000 (0:00:00.248) 0:00:29.279 ****
2026-02-18 03:36:56.735664 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:36:56.735670 | orchestrator |
2026-02-18 03:36:56.735684 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:08.522339 | orchestrator | Wednesday 18 February 2026 03:36:56 +0000 (0:00:00.216) 0:00:29.496 ****
2026-02-18 03:37:08.522436 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:37:08.522447 | orchestrator |
2026-02-18 03:37:08.522456 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:08.522464 | orchestrator | Wednesday 18 February 2026 03:36:56 +0000 (0:00:00.209) 0:00:29.705 ****
2026-02-18 03:37:08.522471 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:37:08.522478 | orchestrator |
2026-02-18 03:37:08.522497 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:08.522503 | orchestrator | Wednesday 18 February 2026 03:36:57 +0000 (0:00:00.245) 0:00:29.951 ****
2026-02-18 03:37:08.522509 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a)
2026-02-18 03:37:08.522517 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a)
2026-02-18 03:37:08.522531 | orchestrator |
2026-02-18 03:37:08.522554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:08.522561 | orchestrator | Wednesday 18 February 2026 03:36:57 +0000 (0:00:00.443) 0:00:30.394 ****
2026-02-18 03:37:08.522568 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3)
2026-02-18 03:37:08.522574 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3)
2026-02-18 03:37:08.522581 | orchestrator |
2026-02-18 03:37:08.522587 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:08.522594 | orchestrator | Wednesday 18 February 2026 03:36:58 +0000 (0:00:00.468) 0:00:30.863 ****
2026-02-18 03:37:08.522601 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19)
2026-02-18 03:37:08.522607 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19)
2026-02-18 03:37:08.522613 | orchestrator |
2026-02-18 03:37:08.522619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:08.522624 | orchestrator | Wednesday 18 February 2026 03:36:58 +0000 (0:00:00.754) 0:00:31.618 ****
2026-02-18 03:37:08.522630 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b)
2026-02-18 03:37:08.522636 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b)
2026-02-18 03:37:08.522643 | orchestrator |
2026-02-18 03:37:08.522649 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:08.522654 | orchestrator | Wednesday 18 February 2026 03:36:59 +0000 (0:00:01.049) 0:00:32.667 ****
2026-02-18 03:37:08.522659 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-18 03:37:08.522666 | orchestrator |
2026-02-18 03:37:08.522673 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:08.522679 | orchestrator | Wednesday 18 February 2026 03:37:00 +0000 (0:00:00.375) 0:00:33.043 ****
2026-02-18 03:37:08.522685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 =>
(item=loop0) 2026-02-18 03:37:08.522693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-18 03:37:08.522701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-18 03:37:08.522725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-18 03:37:08.522732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-18 03:37:08.522779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-18 03:37:08.522786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-18 03:37:08.522793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-18 03:37:08.522799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-18 03:37:08.522806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-18 03:37:08.522812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-18 03:37:08.522818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-18 03:37:08.522825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-18 03:37:08.522831 | orchestrator | 2026-02-18 03:37:08.522837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.522844 | orchestrator | Wednesday 18 February 2026 03:37:00 +0000 (0:00:00.475) 0:00:33.519 **** 2026-02-18 03:37:08.522850 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.522857 | orchestrator | 2026-02-18 
03:37:08.522863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.522870 | orchestrator | Wednesday 18 February 2026 03:37:00 +0000 (0:00:00.225) 0:00:33.744 **** 2026-02-18 03:37:08.522876 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.522882 | orchestrator | 2026-02-18 03:37:08.522888 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.522894 | orchestrator | Wednesday 18 February 2026 03:37:01 +0000 (0:00:00.223) 0:00:33.967 **** 2026-02-18 03:37:08.522900 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.522906 | orchestrator | 2026-02-18 03:37:08.522934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.522941 | orchestrator | Wednesday 18 February 2026 03:37:01 +0000 (0:00:00.246) 0:00:34.214 **** 2026-02-18 03:37:08.522947 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.522954 | orchestrator | 2026-02-18 03:37:08.522960 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.522967 | orchestrator | Wednesday 18 February 2026 03:37:01 +0000 (0:00:00.258) 0:00:34.473 **** 2026-02-18 03:37:08.522973 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.522980 | orchestrator | 2026-02-18 03:37:08.522986 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.522992 | orchestrator | Wednesday 18 February 2026 03:37:01 +0000 (0:00:00.214) 0:00:34.687 **** 2026-02-18 03:37:08.522998 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.523004 | orchestrator | 2026-02-18 03:37:08.523010 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.523016 | orchestrator | Wednesday 18 February 2026 03:37:02 +0000 (0:00:00.214) 
0:00:34.902 **** 2026-02-18 03:37:08.523028 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.523035 | orchestrator | 2026-02-18 03:37:08.523041 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.523048 | orchestrator | Wednesday 18 February 2026 03:37:02 +0000 (0:00:00.213) 0:00:35.115 **** 2026-02-18 03:37:08.523054 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.523061 | orchestrator | 2026-02-18 03:37:08.523067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.523074 | orchestrator | Wednesday 18 February 2026 03:37:03 +0000 (0:00:00.704) 0:00:35.820 **** 2026-02-18 03:37:08.523080 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-18 03:37:08.523094 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-18 03:37:08.523101 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-18 03:37:08.523107 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-18 03:37:08.523114 | orchestrator | 2026-02-18 03:37:08.523120 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.523127 | orchestrator | Wednesday 18 February 2026 03:37:03 +0000 (0:00:00.732) 0:00:36.552 **** 2026-02-18 03:37:08.523133 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.523140 | orchestrator | 2026-02-18 03:37:08.523147 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.523153 | orchestrator | Wednesday 18 February 2026 03:37:04 +0000 (0:00:00.231) 0:00:36.784 **** 2026-02-18 03:37:08.523160 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.523166 | orchestrator | 2026-02-18 03:37:08.523173 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.523179 | orchestrator | Wednesday 18 
February 2026 03:37:04 +0000 (0:00:00.237) 0:00:37.021 **** 2026-02-18 03:37:08.523186 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.523192 | orchestrator | 2026-02-18 03:37:08.523199 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-18 03:37:08.523205 | orchestrator | Wednesday 18 February 2026 03:37:04 +0000 (0:00:00.231) 0:00:37.253 **** 2026-02-18 03:37:08.523212 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.523218 | orchestrator | 2026-02-18 03:37:08.523224 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-18 03:37:08.523231 | orchestrator | Wednesday 18 February 2026 03:37:04 +0000 (0:00:00.241) 0:00:37.494 **** 2026-02-18 03:37:08.523237 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.523243 | orchestrator | 2026-02-18 03:37:08.523250 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-18 03:37:08.523256 | orchestrator | Wednesday 18 February 2026 03:37:04 +0000 (0:00:00.144) 0:00:37.639 **** 2026-02-18 03:37:08.523263 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ef111f9-34b8-55e5-9a40-00a35805e906'}}) 2026-02-18 03:37:08.523270 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '47b33137-1c4f-52d4-af64-ebc2c48f95b1'}}) 2026-02-18 03:37:08.523277 | orchestrator | 2026-02-18 03:37:08.523283 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-18 03:37:08.523290 | orchestrator | Wednesday 18 February 2026 03:37:05 +0000 (0:00:00.216) 0:00:37.855 **** 2026-02-18 03:37:08.523297 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'}) 2026-02-18 03:37:08.523305 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'}) 2026-02-18 03:37:08.523312 | orchestrator | 2026-02-18 03:37:08.523318 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-18 03:37:08.523324 | orchestrator | Wednesday 18 February 2026 03:37:06 +0000 (0:00:01.904) 0:00:39.760 **** 2026-02-18 03:37:08.523330 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})  2026-02-18 03:37:08.523338 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})  2026-02-18 03:37:08.523344 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:08.523351 | orchestrator | 2026-02-18 03:37:08.523357 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-18 03:37:08.523364 | orchestrator | Wednesday 18 February 2026 03:37:07 +0000 (0:00:00.163) 0:00:39.924 **** 2026-02-18 03:37:08.523370 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'}) 2026-02-18 03:37:08.523387 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'}) 2026-02-18 03:37:14.925209 | orchestrator | 2026-02-18 03:37:14.925302 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-18 03:37:14.925314 | orchestrator | Wednesday 18 February 2026 03:37:08 +0000 (0:00:01.355) 0:00:41.279 **** 2026-02-18 03:37:14.925321 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 
'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})  2026-02-18 03:37:14.925330 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})  2026-02-18 03:37:14.925337 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925345 | orchestrator | 2026-02-18 03:37:14.925365 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-18 03:37:14.925372 | orchestrator | Wednesday 18 February 2026 03:37:08 +0000 (0:00:00.410) 0:00:41.690 **** 2026-02-18 03:37:14.925379 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925386 | orchestrator | 2026-02-18 03:37:14.925393 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-18 03:37:14.925400 | orchestrator | Wednesday 18 February 2026 03:37:09 +0000 (0:00:00.142) 0:00:41.833 **** 2026-02-18 03:37:14.925407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})  2026-02-18 03:37:14.925413 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})  2026-02-18 03:37:14.925420 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925427 | orchestrator | 2026-02-18 03:37:14.925434 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-18 03:37:14.925440 | orchestrator | Wednesday 18 February 2026 03:37:09 +0000 (0:00:00.168) 0:00:42.001 **** 2026-02-18 03:37:14.925447 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925454 | orchestrator | 2026-02-18 03:37:14.925460 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-18 03:37:14.925467 | orchestrator | 
Wednesday 18 February 2026 03:37:09 +0000 (0:00:00.147) 0:00:42.149 **** 2026-02-18 03:37:14.925474 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})  2026-02-18 03:37:14.925481 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})  2026-02-18 03:37:14.925487 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925495 | orchestrator | 2026-02-18 03:37:14.925502 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-18 03:37:14.925509 | orchestrator | Wednesday 18 February 2026 03:37:09 +0000 (0:00:00.203) 0:00:42.352 **** 2026-02-18 03:37:14.925515 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925522 | orchestrator | 2026-02-18 03:37:14.925529 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-18 03:37:14.925535 | orchestrator | Wednesday 18 February 2026 03:37:09 +0000 (0:00:00.164) 0:00:42.517 **** 2026-02-18 03:37:14.925542 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})  2026-02-18 03:37:14.925549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})  2026-02-18 03:37:14.925556 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925563 | orchestrator | 2026-02-18 03:37:14.925569 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-18 03:37:14.925591 | orchestrator | Wednesday 18 February 2026 03:37:09 +0000 (0:00:00.200) 0:00:42.717 **** 2026-02-18 03:37:14.925598 | orchestrator | ok: [testbed-node-4] 
2026-02-18 03:37:14.925606 | orchestrator | 2026-02-18 03:37:14.925613 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-18 03:37:14.925620 | orchestrator | Wednesday 18 February 2026 03:37:10 +0000 (0:00:00.164) 0:00:42.882 **** 2026-02-18 03:37:14.925627 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})  2026-02-18 03:37:14.925633 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})  2026-02-18 03:37:14.925640 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925647 | orchestrator | 2026-02-18 03:37:14.925654 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-18 03:37:14.925660 | orchestrator | Wednesday 18 February 2026 03:37:10 +0000 (0:00:00.182) 0:00:43.064 **** 2026-02-18 03:37:14.925667 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})  2026-02-18 03:37:14.925674 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})  2026-02-18 03:37:14.925681 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925688 | orchestrator | 2026-02-18 03:37:14.925694 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-18 03:37:14.925740 | orchestrator | Wednesday 18 February 2026 03:37:10 +0000 (0:00:00.184) 0:00:43.249 **** 2026-02-18 03:37:14.925749 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})  2026-02-18 
03:37:14.925756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})  2026-02-18 03:37:14.925762 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925769 | orchestrator | 2026-02-18 03:37:14.925776 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-18 03:37:14.925782 | orchestrator | Wednesday 18 February 2026 03:37:10 +0000 (0:00:00.177) 0:00:43.426 **** 2026-02-18 03:37:14.925794 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925801 | orchestrator | 2026-02-18 03:37:14.925807 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-18 03:37:14.925814 | orchestrator | Wednesday 18 February 2026 03:37:11 +0000 (0:00:00.375) 0:00:43.802 **** 2026-02-18 03:37:14.925821 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925827 | orchestrator | 2026-02-18 03:37:14.925834 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-18 03:37:14.925841 | orchestrator | Wednesday 18 February 2026 03:37:11 +0000 (0:00:00.153) 0:00:43.955 **** 2026-02-18 03:37:14.925849 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.925860 | orchestrator | 2026-02-18 03:37:14.925871 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-18 03:37:14.925882 | orchestrator | Wednesday 18 February 2026 03:37:11 +0000 (0:00:00.149) 0:00:44.105 **** 2026-02-18 03:37:14.925893 | orchestrator | ok: [testbed-node-4] => { 2026-02-18 03:37:14.925904 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-18 03:37:14.925916 | orchestrator | } 2026-02-18 03:37:14.925927 | orchestrator | 2026-02-18 03:37:14.925939 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-18 
03:37:14.925951 | orchestrator | Wednesday 18 February 2026 03:37:11 +0000 (0:00:00.155) 0:00:44.260 **** 2026-02-18 03:37:14.925964 | orchestrator | ok: [testbed-node-4] => { 2026-02-18 03:37:14.925971 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-18 03:37:14.925991 | orchestrator | } 2026-02-18 03:37:14.926003 | orchestrator | 2026-02-18 03:37:14.926072 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-18 03:37:14.926085 | orchestrator | Wednesday 18 February 2026 03:37:11 +0000 (0:00:00.164) 0:00:44.424 **** 2026-02-18 03:37:14.926092 | orchestrator | ok: [testbed-node-4] => { 2026-02-18 03:37:14.926100 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-18 03:37:14.926111 | orchestrator | } 2026-02-18 03:37:14.926122 | orchestrator | 2026-02-18 03:37:14.926133 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-18 03:37:14.926144 | orchestrator | Wednesday 18 February 2026 03:37:11 +0000 (0:00:00.152) 0:00:44.577 **** 2026-02-18 03:37:14.926156 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:37:14.926167 | orchestrator | 2026-02-18 03:37:14.926179 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-18 03:37:14.926197 | orchestrator | Wednesday 18 February 2026 03:37:12 +0000 (0:00:00.586) 0:00:45.163 **** 2026-02-18 03:37:14.926205 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:37:14.926211 | orchestrator | 2026-02-18 03:37:14.926218 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-18 03:37:14.926224 | orchestrator | Wednesday 18 February 2026 03:37:12 +0000 (0:00:00.546) 0:00:45.709 **** 2026-02-18 03:37:14.926231 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:37:14.926238 | orchestrator | 2026-02-18 03:37:14.926244 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-18 03:37:14.926251 | orchestrator | Wednesday 18 February 2026 03:37:13 +0000 (0:00:00.551) 0:00:46.261 **** 2026-02-18 03:37:14.926258 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:37:14.926264 | orchestrator | 2026-02-18 03:37:14.926272 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-18 03:37:14.926284 | orchestrator | Wednesday 18 February 2026 03:37:13 +0000 (0:00:00.164) 0:00:46.426 **** 2026-02-18 03:37:14.926295 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.926306 | orchestrator | 2026-02-18 03:37:14.926318 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-18 03:37:14.926329 | orchestrator | Wednesday 18 February 2026 03:37:13 +0000 (0:00:00.138) 0:00:46.564 **** 2026-02-18 03:37:14.926340 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.926351 | orchestrator | 2026-02-18 03:37:14.926362 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-18 03:37:14.926374 | orchestrator | Wednesday 18 February 2026 03:37:14 +0000 (0:00:00.362) 0:00:46.927 **** 2026-02-18 03:37:14.926382 | orchestrator | ok: [testbed-node-4] => { 2026-02-18 03:37:14.926389 | orchestrator |  "vgs_report": { 2026-02-18 03:37:14.926396 | orchestrator |  "vg": [] 2026-02-18 03:37:14.926403 | orchestrator |  } 2026-02-18 03:37:14.926409 | orchestrator | } 2026-02-18 03:37:14.926416 | orchestrator | 2026-02-18 03:37:14.926423 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-18 03:37:14.926429 | orchestrator | Wednesday 18 February 2026 03:37:14 +0000 (0:00:00.157) 0:00:47.085 **** 2026-02-18 03:37:14.926436 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.926442 | orchestrator | 2026-02-18 03:37:14.926449 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-18 03:37:14.926455 | orchestrator | Wednesday 18 February 2026 03:37:14 +0000 (0:00:00.164) 0:00:47.250 **** 2026-02-18 03:37:14.926462 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.926468 | orchestrator | 2026-02-18 03:37:14.926475 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-18 03:37:14.926482 | orchestrator | Wednesday 18 February 2026 03:37:14 +0000 (0:00:00.151) 0:00:47.401 **** 2026-02-18 03:37:14.926488 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.926495 | orchestrator | 2026-02-18 03:37:14.926501 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-18 03:37:14.926508 | orchestrator | Wednesday 18 February 2026 03:37:14 +0000 (0:00:00.133) 0:00:47.534 **** 2026-02-18 03:37:14.926522 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:14.926528 | orchestrator | 2026-02-18 03:37:14.926541 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-18 03:37:20.179432 | orchestrator | Wednesday 18 February 2026 03:37:14 +0000 (0:00:00.155) 0:00:47.690 **** 2026-02-18 03:37:20.179542 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:20.179560 | orchestrator | 2026-02-18 03:37:20.179573 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-18 03:37:20.179585 | orchestrator | Wednesday 18 February 2026 03:37:15 +0000 (0:00:00.161) 0:00:47.851 **** 2026-02-18 03:37:20.179596 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:20.179612 | orchestrator | 2026-02-18 03:37:20.179632 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-18 03:37:20.179652 | orchestrator | Wednesday 18 February 2026 03:37:15 +0000 (0:00:00.153) 0:00:48.005 **** 2026-02-18 03:37:20.179672 | orchestrator | skipping: [testbed-node-4] 
2026-02-18 03:37:20.179750 | orchestrator | 2026-02-18 03:37:20.179790 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-18 03:37:20.179808 | orchestrator | Wednesday 18 February 2026 03:37:15 +0000 (0:00:00.134) 0:00:48.139 **** 2026-02-18 03:37:20.179823 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:20.179840 | orchestrator | 2026-02-18 03:37:20.179857 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-18 03:37:20.179873 | orchestrator | Wednesday 18 February 2026 03:37:15 +0000 (0:00:00.142) 0:00:48.282 **** 2026-02-18 03:37:20.179890 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:20.179907 | orchestrator | 2026-02-18 03:37:20.179925 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-18 03:37:20.179943 | orchestrator | Wednesday 18 February 2026 03:37:15 +0000 (0:00:00.151) 0:00:48.433 **** 2026-02-18 03:37:20.179962 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:20.179982 | orchestrator | 2026-02-18 03:37:20.180002 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-18 03:37:20.180022 | orchestrator | Wednesday 18 February 2026 03:37:16 +0000 (0:00:00.394) 0:00:48.827 **** 2026-02-18 03:37:20.180042 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:20.180061 | orchestrator | 2026-02-18 03:37:20.180080 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-18 03:37:20.180099 | orchestrator | Wednesday 18 February 2026 03:37:16 +0000 (0:00:00.164) 0:00:48.991 **** 2026-02-18 03:37:20.180118 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:20.180138 | orchestrator | 2026-02-18 03:37:20.180157 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-18 03:37:20.180177 | orchestrator | 
Wednesday 18 February 2026 03:37:16 +0000 (0:00:00.156) 0:00:49.148 **** 2026-02-18 03:37:20.180190 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:20.180203 | orchestrator | 2026-02-18 03:37:20.180216 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-18 03:37:20.180229 | orchestrator | Wednesday 18 February 2026 03:37:16 +0000 (0:00:00.137) 0:00:49.285 **** 2026-02-18 03:37:20.180241 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:20.180253 | orchestrator | 2026-02-18 03:37:20.180266 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-18 03:37:20.180278 | orchestrator | Wednesday 18 February 2026 03:37:16 +0000 (0:00:00.150) 0:00:49.435 **** 2026-02-18 03:37:20.180292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})  2026-02-18 03:37:20.180307 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})  2026-02-18 03:37:20.180317 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:37:20.180328 | orchestrator | 2026-02-18 03:37:20.180339 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-18 03:37:20.180378 | orchestrator | Wednesday 18 February 2026 03:37:16 +0000 (0:00:00.169) 0:00:49.605 **** 2026-02-18 03:37:20.180389 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})  2026-02-18 03:37:20.180400 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})  2026-02-18 03:37:20.180411 | orchestrator | skipping: 
[testbed-node-4]
2026-02-18 03:37:20.180421 | orchestrator |
2026-02-18 03:37:20.180432 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-18 03:37:20.180442 | orchestrator | Wednesday 18 February 2026 03:37:17 +0000 (0:00:00.178) 0:00:49.783 ****
2026-02-18 03:37:20.180453 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 03:37:20.180464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 03:37:20.180474 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:37:20.180486 | orchestrator |
2026-02-18 03:37:20.180497 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-18 03:37:20.180515 | orchestrator | Wednesday 18 February 2026 03:37:17 +0000 (0:00:00.205) 0:00:49.989 ****
2026-02-18 03:37:20.180534 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 03:37:20.180552 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 03:37:20.180570 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:37:20.180590 | orchestrator |
2026-02-18 03:37:20.180635 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-18 03:37:20.180651 | orchestrator | Wednesday 18 February 2026 03:37:17 +0000 (0:00:00.196) 0:00:50.185 ****
2026-02-18 03:37:20.180662 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 03:37:20.180672 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 03:37:20.180712 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:37:20.180725 | orchestrator |
2026-02-18 03:37:20.180745 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-18 03:37:20.180757 | orchestrator | Wednesday 18 February 2026 03:37:17 +0000 (0:00:00.181) 0:00:50.366 ****
2026-02-18 03:37:20.180767 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 03:37:20.180778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 03:37:20.180789 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:37:20.180799 | orchestrator |
2026-02-18 03:37:20.180810 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-18 03:37:20.180821 | orchestrator | Wednesday 18 February 2026 03:37:17 +0000 (0:00:00.177) 0:00:50.543 ****
2026-02-18 03:37:20.180831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 03:37:20.180842 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 03:37:20.180853 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:37:20.180874 | orchestrator |
2026-02-18 03:37:20.180885 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-18 03:37:20.180895 | orchestrator | Wednesday 18 February 2026 03:37:18 +0000 (0:00:00.424) 0:00:50.968 ****
2026-02-18 03:37:20.180906 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 03:37:20.180917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 03:37:20.180927 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:37:20.180940 | orchestrator |
2026-02-18 03:37:20.180960 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-18 03:37:20.180979 | orchestrator | Wednesday 18 February 2026 03:37:18 +0000 (0:00:00.175) 0:00:51.143 ****
2026-02-18 03:37:20.181000 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:37:20.181020 | orchestrator |
2026-02-18 03:37:20.181039 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-18 03:37:20.181051 | orchestrator | Wednesday 18 February 2026 03:37:18 +0000 (0:00:00.546) 0:00:51.689 ****
2026-02-18 03:37:20.181062 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:37:20.181072 | orchestrator |
2026-02-18 03:37:20.181083 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-18 03:37:20.181094 | orchestrator | Wednesday 18 February 2026 03:37:19 +0000 (0:00:00.532) 0:00:52.222 ****
2026-02-18 03:37:20.181104 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:37:20.181115 | orchestrator |
2026-02-18 03:37:20.181125 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-18 03:37:20.181136 | orchestrator | Wednesday 18 February 2026 03:37:19 +0000 (0:00:00.164) 0:00:52.387 ****
2026-02-18 03:37:20.181147 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'vg_name': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 03:37:20.181159 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'vg_name': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 03:37:20.181169 | orchestrator |
2026-02-18 03:37:20.181180 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-18 03:37:20.181191 | orchestrator | Wednesday 18 February 2026 03:37:19 +0000 (0:00:00.188) 0:00:52.575 ****
2026-02-18 03:37:20.181201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 03:37:20.181212 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 03:37:20.181223 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:37:20.181234 | orchestrator |
2026-02-18 03:37:20.181244 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-18 03:37:20.181255 | orchestrator | Wednesday 18 February 2026 03:37:19 +0000 (0:00:00.168) 0:00:52.744 ****
2026-02-18 03:37:20.181266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 03:37:20.181285 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 03:37:27.141920 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:37:27.142078 | orchestrator |
2026-02-18 03:37:27.142095 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-18 03:37:27.142108 | orchestrator | Wednesday 18 February 2026 03:37:20 +0000 (0:00:00.200) 0:00:52.944 ****
2026-02-18 03:37:27.142120 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 03:37:27.142168 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 03:37:27.142179 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:37:27.142188 | orchestrator |
2026-02-18 03:37:27.142200 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-18 03:37:27.142210 | orchestrator | Wednesday 18 February 2026 03:37:20 +0000 (0:00:00.217) 0:00:53.162 ****
2026-02-18 03:37:27.142222 | orchestrator | ok: [testbed-node-4] => {
2026-02-18 03:37:27.142232 | orchestrator |     "lvm_report": {
2026-02-18 03:37:27.142243 | orchestrator |         "lv": [
2026-02-18 03:37:27.142253 | orchestrator |             {
2026-02-18 03:37:27.142263 | orchestrator |                 "lv_name": "osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1",
2026-02-18 03:37:27.142274 | orchestrator |                 "vg_name": "ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1"
2026-02-18 03:37:27.142283 | orchestrator |             },
2026-02-18 03:37:27.142293 | orchestrator |             {
2026-02-18 03:37:27.142302 | orchestrator |                 "lv_name": "osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906",
2026-02-18 03:37:27.142312 | orchestrator |                 "vg_name": "ceph-8ef111f9-34b8-55e5-9a40-00a35805e906"
2026-02-18 03:37:27.142322 | orchestrator |             }
2026-02-18 03:37:27.142331 | orchestrator |         ],
2026-02-18 03:37:27.142342 | orchestrator |         "pv": [
2026-02-18 03:37:27.142353 | orchestrator |             {
2026-02-18 03:37:27.142363 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-18 03:37:27.142374 | orchestrator |                 "vg_name": "ceph-8ef111f9-34b8-55e5-9a40-00a35805e906"
2026-02-18 03:37:27.142386 | orchestrator |             },
2026-02-18 03:37:27.142397 | orchestrator |             {
2026-02-18 03:37:27.142406 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-18 03:37:27.142416 | orchestrator |                 "vg_name": "ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1"
2026-02-18 03:37:27.142425 | orchestrator |             }
2026-02-18 03:37:27.142435 | orchestrator |         ]
2026-02-18 03:37:27.142445 | orchestrator |     }
2026-02-18 03:37:27.142456 | orchestrator | }
2026-02-18 03:37:27.142466 | orchestrator |
2026-02-18 03:37:27.142477 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-18 03:37:27.142488 | orchestrator |
2026-02-18 03:37:27.142498 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-18 03:37:27.142509 | orchestrator | Wednesday 18 February 2026 03:37:20 +0000 (0:00:00.302) 0:00:53.465 ****
2026-02-18 03:37:27.142520 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-18 03:37:27.142533 | orchestrator |
2026-02-18 03:37:27.142544 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-18 03:37:27.142555 | orchestrator | Wednesday 18 February 2026 03:37:21 +0000 (0:00:00.720) 0:00:54.185 ****
2026-02-18 03:37:27.142565 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:37:27.142575 | orchestrator |
2026-02-18 03:37:27.142586 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.142596 | orchestrator | Wednesday 18 February 2026 03:37:21 +0000 (0:00:00.250) 0:00:54.436 ****
2026-02-18 03:37:27.142608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-18 03:37:27.142619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-18 03:37:27.142630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-18 03:37:27.142642 |
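The "Get list of Ceph LVs/PVs with associated VGs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" tasks merge two lvm2 JSON reports into the single `lvm_report` structure printed above. A minimal sketch of that combination step, assuming report shapes like those emitted by `lvs --reportformat json -o lv_name,vg_name` and `pvs --reportformat json -o pv_name,vg_name` (the variable names here are illustrative, not the playbook's actual ones):

```python
import json

# Simplified stand-ins for the registered lvs/pvs command output
# (shape assumed from lvm2's JSON report format).
lvs_output = json.loads("""
{"report": [{"lv": [
  {"lv_name": "osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1",
   "vg_name": "ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1"}
]}]}
""")
pvs_output = json.loads("""
{"report": [{"pv": [
  {"pv_name": "/dev/sdc",
   "vg_name": "ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1"}
]}]}
""")

# Combine both reports into one dict, mirroring the lvm_report debug output.
lvm_report = {
    "lv": lvs_output["report"][0]["lv"],
    "pv": pvs_output["report"][0]["pv"],
}
print(json.dumps(lvm_report, indent=2))
```

With the report in this form, the later "Fail if ... LV defined in lvm_volumes is missing" checks reduce to membership tests against the `lv` list.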
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-18 03:37:27.142675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-18 03:37:27.142687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-18 03:37:27.142697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-18 03:37:27.142717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-18 03:37:27.142727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-18 03:37:27.142736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-18 03:37:27.142745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-18 03:37:27.142754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-18 03:37:27.142764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-18 03:37:27.142773 | orchestrator |
2026-02-18 03:37:27.142783 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.142792 | orchestrator | Wednesday 18 February 2026 03:37:22 +0000 (0:00:00.454) 0:00:54.890 ****
2026-02-18 03:37:27.142801 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:27.142810 | orchestrator |
2026-02-18 03:37:27.142818 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.142827 | orchestrator | Wednesday 18 February 2026 03:37:22 +0000 (0:00:00.216) 0:00:55.107 ****
2026-02-18 03:37:27.142836 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:27.142845 | orchestrator |
2026-02-18 03:37:27.142854 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.142881 | orchestrator | Wednesday 18 February 2026 03:37:22 +0000 (0:00:00.219) 0:00:55.326 ****
2026-02-18 03:37:27.142890 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:27.142899 | orchestrator |
2026-02-18 03:37:27.142908 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.142917 | orchestrator | Wednesday 18 February 2026 03:37:22 +0000 (0:00:00.237) 0:00:55.564 ****
2026-02-18 03:37:27.142926 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:27.142934 | orchestrator |
2026-02-18 03:37:27.142942 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.142950 | orchestrator | Wednesday 18 February 2026 03:37:23 +0000 (0:00:00.230) 0:00:55.795 ****
2026-02-18 03:37:27.142958 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:27.142965 | orchestrator |
2026-02-18 03:37:27.142973 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.142980 | orchestrator | Wednesday 18 February 2026 03:37:23 +0000 (0:00:00.207) 0:00:56.003 ****
2026-02-18 03:37:27.142988 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:27.142995 | orchestrator |
2026-02-18 03:37:27.143003 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.143011 | orchestrator | Wednesday 18 February 2026 03:37:23 +0000 (0:00:00.214) 0:00:56.217 ****
2026-02-18 03:37:27.143019 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:27.143029 | orchestrator |
2026-02-18 03:37:27.143038 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.143047 | orchestrator | Wednesday 18 February 2026 03:37:23 +0000 (0:00:00.213) 0:00:56.431 ****
2026-02-18 03:37:27.143056 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:27.143065 | orchestrator |
2026-02-18 03:37:27.143074 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.143083 | orchestrator | Wednesday 18 February 2026 03:37:24 +0000 (0:00:00.717) 0:00:57.148 ****
2026-02-18 03:37:27.143091 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039)
2026-02-18 03:37:27.143101 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039)
2026-02-18 03:37:27.143108 | orchestrator |
2026-02-18 03:37:27.143117 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.143125 | orchestrator | Wednesday 18 February 2026 03:37:24 +0000 (0:00:00.498) 0:00:57.647 ****
2026-02-18 03:37:27.143172 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322)
2026-02-18 03:37:27.143187 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322)
2026-02-18 03:37:27.143196 | orchestrator |
2026-02-18 03:37:27.143205 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.143214 | orchestrator | Wednesday 18 February 2026 03:37:25 +0000 (0:00:00.472) 0:00:58.120 ****
2026-02-18 03:37:27.143223 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d)
2026-02-18 03:37:27.143232 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d)
2026-02-18 03:37:27.143240 | orchestrator |
2026-02-18 03:37:27.143247 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.143255 | orchestrator | Wednesday 18 February 2026 03:37:25 +0000 (0:00:00.488) 0:00:58.608 ****
2026-02-18 03:37:27.143263 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d)
2026-02-18 03:37:27.143271 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d)
2026-02-18 03:37:27.143278 | orchestrator |
2026-02-18 03:37:27.143286 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-18 03:37:27.143294 | orchestrator | Wednesday 18 February 2026 03:37:26 +0000 (0:00:00.465) 0:00:59.074 ****
2026-02-18 03:37:27.143302 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-18 03:37:27.143309 | orchestrator |
2026-02-18 03:37:27.143317 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:27.143325 | orchestrator | Wednesday 18 February 2026 03:37:26 +0000 (0:00:00.375) 0:00:59.450 ****
2026-02-18 03:37:27.143332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-18 03:37:27.143340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-18 03:37:27.143349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-18 03:37:27.143356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-18 03:37:27.143364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-18 03:37:27.143372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-18 03:37:27.143380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-18 03:37:27.143388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-18 03:37:27.143395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-18 03:37:27.143403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-18 03:37:27.143411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-18 03:37:27.143424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-18 03:37:36.677938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-18 03:37:36.678118 | orchestrator |
2026-02-18 03:37:36.678142 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678164 | orchestrator | Wednesday 18 February 2026 03:37:27 +0000 (0:00:00.452) 0:00:59.903 ****
2026-02-18 03:37:36.678179 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678193 | orchestrator |
2026-02-18 03:37:36.678205 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678235 | orchestrator | Wednesday 18 February 2026 03:37:27 +0000 (0:00:00.255) 0:01:00.159 ****
2026-02-18 03:37:36.678243 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678269 | orchestrator |
2026-02-18 03:37:36.678277 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678284 | orchestrator | Wednesday 18 February 2026 03:37:27 +0000 (0:00:00.252) 0:01:00.411 ****
2026-02-18 03:37:36.678291 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678298 | orchestrator |
2026-02-18 03:37:36.678305 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678313 | orchestrator | Wednesday 18 February 2026 03:37:27 +0000 (0:00:00.220) 0:01:00.632 ****
2026-02-18 03:37:36.678320 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678327 | orchestrator |
2026-02-18 03:37:36.678334 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678341 | orchestrator | Wednesday 18 February 2026 03:37:28 +0000 (0:00:00.226) 0:01:00.858 ****
2026-02-18 03:37:36.678348 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678356 | orchestrator |
2026-02-18 03:37:36.678364 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678391 | orchestrator | Wednesday 18 February 2026 03:37:28 +0000 (0:00:00.749) 0:01:01.608 ****
2026-02-18 03:37:36.678402 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678413 | orchestrator |
2026-02-18 03:37:36.678425 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678438 | orchestrator | Wednesday 18 February 2026 03:37:29 +0000 (0:00:00.226) 0:01:01.835 ****
2026-02-18 03:37:36.678450 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678462 | orchestrator |
2026-02-18 03:37:36.678471 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678480 | orchestrator | Wednesday 18 February 2026 03:37:29 +0000 (0:00:00.226) 0:01:02.061 ****
2026-02-18 03:37:36.678488 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678497 | orchestrator |
2026-02-18 03:37:36.678506 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678514 | orchestrator | Wednesday 18 February 2026 03:37:29 +0000 (0:00:00.227) 0:01:02.288 ****
2026-02-18 03:37:36.678523 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-18 03:37:36.678532 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-18 03:37:36.678541 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-18 03:37:36.678549 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-18 03:37:36.678557 | orchestrator |
2026-02-18 03:37:36.678565 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678574 | orchestrator | Wednesday 18 February 2026 03:37:30 +0000 (0:00:00.719) 0:01:03.008 ****
2026-02-18 03:37:36.678582 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678590 | orchestrator |
2026-02-18 03:37:36.678598 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678606 | orchestrator | Wednesday 18 February 2026 03:37:30 +0000 (0:00:00.250) 0:01:03.259 ****
2026-02-18 03:37:36.678638 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678716 | orchestrator |
2026-02-18 03:37:36.678726 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678734 | orchestrator | Wednesday 18 February 2026 03:37:30 +0000 (0:00:00.215) 0:01:03.474 ****
2026-02-18 03:37:36.678741 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678748 | orchestrator |
2026-02-18 03:37:36.678755 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-18 03:37:36.678762 | orchestrator | Wednesday 18 February 2026 03:37:30 +0000 (0:00:00.231) 0:01:03.705 ****
2026-02-18 03:37:36.678769 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678777 | orchestrator |
2026-02-18 03:37:36.678784 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-18 03:37:36.678791 | orchestrator | Wednesday 18 February 2026 03:37:31 +0000 (0:00:00.232) 0:01:03.938 ****
2026-02-18 03:37:36.678798 | orchestrator | skipping: [testbed-node-5]
2026-02-18
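The "Create dict of block VGs -> PVs from ceph_osd_devices" and the subsequent "Create block VGs/LVs" tasks derive deterministic names from each device's `osd_lvm_uuid` (VG `ceph-<uuid>`, LV `osd-block-<uuid>`), as the item output in the log shows. A minimal sketch of that naming step, with illustrative variable names (not the playbook's actual internals):

```python
# ceph_osd_devices as seen in the log items for testbed-node-5.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "b4fe298a-487d-5630-bf9a-8376c13eb8c3"},
    "sdc": {"osd_lvm_uuid": "a3fa5e2b-5aa1-58af-bddd-1734a40d2e72"},
}

# Map each block VG name to its backing PV, and build the lvm_volumes-style
# items ({'data': ..., 'data_vg': ...}) that the create-VG/LV tasks loop over.
vg_to_pv = {}
lvm_volumes = []
for device, props in ceph_osd_devices.items():
    uuid = props["osd_lvm_uuid"]
    vg_to_pv[f"ceph-{uuid}"] = f"/dev/{device}"
    lvm_volumes.append({"data": f"osd-block-{uuid}", "data_vg": f"ceph-{uuid}"})

print(vg_to_pv)
print(lvm_volumes)
```

Because the names are derived purely from the stable UUID, rerunning the play (for example during an upgrade) resolves to the same VGs and LVs instead of creating duplicates.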
03:37:36.678805 | orchestrator |
2026-02-18 03:37:36.678825 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-18 03:37:36.678839 | orchestrator | Wednesday 18 February 2026 03:37:31 +0000 (0:00:00.137) 0:01:04.075 ****
2026-02-18 03:37:36.678853 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b4fe298a-487d-5630-bf9a-8376c13eb8c3'}})
2026-02-18 03:37:36.678866 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'}})
2026-02-18 03:37:36.678878 | orchestrator |
2026-02-18 03:37:36.678891 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-18 03:37:36.678904 | orchestrator | Wednesday 18 February 2026 03:37:31 +0000 (0:00:00.214) 0:01:04.289 ****
2026-02-18 03:37:36.678919 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 03:37:36.678930 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 03:37:36.678937 | orchestrator |
2026-02-18 03:37:36.678944 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-18 03:37:36.678968 | orchestrator | Wednesday 18 February 2026 03:37:33 +0000 (0:00:01.883) 0:01:06.173 ****
2026-02-18 03:37:36.678976 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 03:37:36.678984 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 03:37:36.678991 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.678998 | orchestrator |
2026-02-18 03:37:36.679012 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-18 03:37:36.679019 | orchestrator | Wednesday 18 February 2026 03:37:33 +0000 (0:00:00.408) 0:01:06.581 ****
2026-02-18 03:37:36.679027 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 03:37:36.679034 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 03:37:36.679041 | orchestrator |
2026-02-18 03:37:36.679048 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-18 03:37:36.679055 | orchestrator | Wednesday 18 February 2026 03:37:35 +0000 (0:00:01.390) 0:01:07.971 ****
2026-02-18 03:37:36.679063 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 03:37:36.679070 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 03:37:36.679077 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.679084 | orchestrator |
2026-02-18 03:37:36.679091 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-18 03:37:36.679098 | orchestrator | Wednesday 18 February 2026 03:37:35 +0000 (0:00:00.172) 0:01:08.144 ****
2026-02-18 03:37:36.679106 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.679113 | orchestrator |
2026-02-18 03:37:36.679120 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-18 03:37:36.679127 | orchestrator | Wednesday 18 February 2026 03:37:35 +0000 (0:00:00.133) 0:01:08.278 ****
2026-02-18 03:37:36.679134 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 03:37:36.679141 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 03:37:36.679154 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.679161 | orchestrator |
2026-02-18 03:37:36.679168 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-18 03:37:36.679175 | orchestrator | Wednesday 18 February 2026 03:37:35 +0000 (0:00:00.171) 0:01:08.449 ****
2026-02-18 03:37:36.679182 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.679189 | orchestrator |
2026-02-18 03:37:36.679196 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-18 03:37:36.679204 | orchestrator | Wednesday 18 February 2026 03:37:35 +0000 (0:00:00.154) 0:01:08.603 ****
2026-02-18 03:37:36.679211 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 03:37:36.679223 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 03:37:36.679235 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.679248 | orchestrator |
2026-02-18 03:37:36.679260 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-18 03:37:36.679272 | orchestrator | Wednesday 18 February 2026 03:37:35 +0000 (0:00:00.167) 0:01:08.770 ****
2026-02-18 03:37:36.679284 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.679291 | orchestrator |
2026-02-18 03:37:36.679298 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-18 03:37:36.679305 | orchestrator | Wednesday 18 February 2026 03:37:36 +0000 (0:00:00.149) 0:01:08.920 ****
2026-02-18 03:37:36.679312 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 03:37:36.679320 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 03:37:36.679327 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:36.679334 | orchestrator |
2026-02-18 03:37:36.679341 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-18 03:37:36.679348 | orchestrator | Wednesday 18 February 2026 03:37:36 +0000 (0:00:00.165) 0:01:09.106 ****
2026-02-18 03:37:36.679356 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:37:36.679363 | orchestrator |
2026-02-18 03:37:36.679370 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-18 03:37:36.679378 | orchestrator | Wednesday 18 February 2026 03:37:36 +0000 (0:00:00.165) 0:01:09.272 ****
2026-02-18 03:37:36.679396 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 03:37:43.457674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 03:37:43.457850 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:43.457881 | orchestrator |
2026-02-18 03:37:43.457902 | orchestrator | TASK [Count OSDs put on
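The "Count OSDs put on ceph_db_devices ..." tasks tally how many `lvm_volumes` entries place their DB/WAL on each shared VG, so the following "Fail if number of OSDs exceeds num_osds ..." tasks can compare the tally against the configured limit. A sketch of the same shape of check, under the assumption that entries reference a shared DB VG via a `db_vg` key and that `num_osds` caps each VG (both names here are hypothetical stand-ins):

```python
# Hypothetical lvm_volumes entries sharing one DB VG, plus per-VG OSD limits.
lvm_volumes = [
    {"data": "osd-block-1", "data_vg": "ceph-1", "db_vg": "ceph-db-0"},
    {"data": "osd-block-2", "data_vg": "ceph-2", "db_vg": "ceph-db-0"},
]
num_osds = {"ceph-db-0": 2}

# Count OSDs wanted per DB VG.
wanted = {}
for vol in lvm_volumes:
    vg = vol.get("db_vg")
    if vg:
        wanted[vg] = wanted.get(vg, 0) + 1

# Fail early if any VG is oversubscribed, mirroring the fail tasks above.
for vg, count in wanted.items():
    assert count <= num_osds[vg], f"{vg}: {count} OSDs exceed num_osds={num_osds[vg]}"

print(wanted)
```

In this run the counts are empty (the `_num_osds_wanted_per_*_vg` debug output below prints `{}`), because no dedicated DB/WAL devices are configured in the testbed.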
ceph_wal_devices defined in lvm_volumes] ***************
2026-02-18 03:37:43.457922 | orchestrator | Wednesday 18 February 2026 03:37:36 +0000 (0:00:00.169) 0:01:09.442 ****
2026-02-18 03:37:43.458884 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 03:37:43.458942 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 03:37:43.458965 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:43.458986 | orchestrator |
2026-02-18 03:37:43.459007 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-18 03:37:43.459028 | orchestrator | Wednesday 18 February 2026 03:37:36 +0000 (0:00:00.165) 0:01:09.607 ****
2026-02-18 03:37:43.459081 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 03:37:43.459102 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 03:37:43.459121 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:43.459140 | orchestrator |
2026-02-18 03:37:43.459159 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-18 03:37:43.459176 | orchestrator | Wednesday 18 February 2026 03:37:37 +0000 (0:00:00.407) 0:01:10.015 ****
2026-02-18 03:37:43.459194 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:43.459214 | orchestrator |
2026-02-18 03:37:43.459233 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-18 03:37:43.459254 | orchestrator | Wednesday 18 February 2026 03:37:37 +0000 (0:00:00.154) 0:01:10.169 ****
2026-02-18 03:37:43.459274 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:43.459296 | orchestrator |
2026-02-18 03:37:43.459317 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-18 03:37:43.459338 | orchestrator | Wednesday 18 February 2026 03:37:37 +0000 (0:00:00.153) 0:01:10.322 ****
2026-02-18 03:37:43.459358 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:43.459378 | orchestrator |
2026-02-18 03:37:43.459398 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-18 03:37:43.459419 | orchestrator | Wednesday 18 February 2026 03:37:37 +0000 (0:00:00.155) 0:01:10.478 ****
2026-02-18 03:37:43.459439 | orchestrator | ok: [testbed-node-5] => {
2026-02-18 03:37:43.459462 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-18 03:37:43.459483 | orchestrator | }
2026-02-18 03:37:43.459503 | orchestrator |
2026-02-18 03:37:43.459524 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-18 03:37:43.459542 | orchestrator | Wednesday 18 February 2026 03:37:37 +0000 (0:00:00.153) 0:01:10.631 ****
2026-02-18 03:37:43.459560 | orchestrator | ok: [testbed-node-5] => {
2026-02-18 03:37:43.459676 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-18 03:37:43.459705 | orchestrator | }
2026-02-18 03:37:43.459726 | orchestrator |
2026-02-18 03:37:43.459747 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-18 03:37:43.459767 | orchestrator | Wednesday 18 February 2026 03:37:38 +0000 (0:00:00.169) 0:01:10.800 ****
2026-02-18 03:37:43.459787 | orchestrator | ok: [testbed-node-5] => {
2026-02-18 03:37:43.459807 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-18 03:37:43.459827 | orchestrator | }
2026-02-18 03:37:43.459846 | orchestrator |
2026-02-18 03:37:43.459866 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-18 03:37:43.459886 | orchestrator | Wednesday 18 February 2026 03:37:38 +0000 (0:00:00.159) 0:01:10.960 ****
2026-02-18 03:37:43.459906 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:37:43.459926 | orchestrator |
2026-02-18 03:37:43.459945 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-18 03:37:43.459965 | orchestrator | Wednesday 18 February 2026 03:37:38 +0000 (0:00:00.560) 0:01:11.520 ****
2026-02-18 03:37:43.459983 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:37:43.460001 | orchestrator |
2026-02-18 03:37:43.460017 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-18 03:37:43.460034 | orchestrator | Wednesday 18 February 2026 03:37:39 +0000 (0:00:00.530) 0:01:12.051 ****
2026-02-18 03:37:43.460052 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:37:43.460070 | orchestrator |
2026-02-18 03:37:43.460089 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-18 03:37:43.460106 | orchestrator | Wednesday 18 February 2026 03:37:39 +0000 (0:00:00.522) 0:01:12.573 ****
2026-02-18 03:37:43.460123 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:37:43.460141 | orchestrator |
2026-02-18 03:37:43.460159 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-18 03:37:43.460197 | orchestrator | Wednesday 18 February 2026 03:37:39 +0000 (0:00:00.148) 0:01:12.722 ****
2026-02-18 03:37:43.460215 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:43.460234 | orchestrator |
2026-02-18 03:37:43.460251 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-18 03:37:43.460270 | orchestrator | Wednesday 18 February 2026 03:37:40 +0000 (0:00:00.119) 0:01:12.841 ****
2026-02-18 03:37:43.460289 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:43.460307 | orchestrator |
2026-02-18 03:37:43.460325 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-18 03:37:43.460344 | orchestrator | Wednesday 18 February 2026 03:37:40 +0000 (0:00:00.378) 0:01:13.219 ****
2026-02-18 03:37:43.460361 | orchestrator | ok: [testbed-node-5] => {
2026-02-18 03:37:43.460381 | orchestrator |     "vgs_report": {
2026-02-18 03:37:43.460399 | orchestrator |         "vg": []
2026-02-18 03:37:43.460450 | orchestrator |     }
2026-02-18 03:37:43.460472 | orchestrator | }
2026-02-18 03:37:43.460491 | orchestrator |
2026-02-18 03:37:43.460508 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-18 03:37:43.460527 | orchestrator | Wednesday 18 February 2026 03:37:40 +0000 (0:00:00.160) 0:01:13.380 ****
2026-02-18 03:37:43.460543 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:43.460562 | orchestrator |
2026-02-18 03:37:43.460609 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-18 03:37:43.460630 | orchestrator | Wednesday 18 February 2026 03:37:40 +0000 (0:00:00.150) 0:01:13.531 ****
2026-02-18 03:37:43.460662 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:43.460680 | orchestrator |
2026-02-18 03:37:43.460698 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-18 03:37:43.460716 | orchestrator | Wednesday 18 February 2026 03:37:40 +0000 (0:00:00.152) 0:01:13.683 ****
2026-02-18 03:37:43.460733 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:37:43.460751 | orchestrator |
2026-02-18 03:37:43.460770 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-18 03:37:43.460789 | orchestrator | Wednesday 18 February 2026 03:37:41 +0000 (0:00:00.150) 0:01:13.834 ****
2026-02-18 03:37:43.460807 |
orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.460825 | orchestrator | 2026-02-18 03:37:43.460838 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-18 03:37:43.460847 | orchestrator | Wednesday 18 February 2026 03:37:41 +0000 (0:00:00.129) 0:01:13.964 **** 2026-02-18 03:37:43.460856 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.460866 | orchestrator | 2026-02-18 03:37:43.460875 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-18 03:37:43.460884 | orchestrator | Wednesday 18 February 2026 03:37:41 +0000 (0:00:00.139) 0:01:14.103 **** 2026-02-18 03:37:43.460894 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.460903 | orchestrator | 2026-02-18 03:37:43.460912 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-18 03:37:43.460922 | orchestrator | Wednesday 18 February 2026 03:37:41 +0000 (0:00:00.151) 0:01:14.255 **** 2026-02-18 03:37:43.460931 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.460941 | orchestrator | 2026-02-18 03:37:43.460950 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-18 03:37:43.460960 | orchestrator | Wednesday 18 February 2026 03:37:41 +0000 (0:00:00.149) 0:01:14.404 **** 2026-02-18 03:37:43.460970 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.460979 | orchestrator | 2026-02-18 03:37:43.460989 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-18 03:37:43.460998 | orchestrator | Wednesday 18 February 2026 03:37:41 +0000 (0:00:00.157) 0:01:14.562 **** 2026-02-18 03:37:43.461007 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.461017 | orchestrator | 2026-02-18 03:37:43.461026 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-02-18 03:37:43.461035 | orchestrator | Wednesday 18 February 2026 03:37:41 +0000 (0:00:00.133) 0:01:14.695 **** 2026-02-18 03:37:43.461057 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.461067 | orchestrator | 2026-02-18 03:37:43.461076 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-18 03:37:43.461085 | orchestrator | Wednesday 18 February 2026 03:37:42 +0000 (0:00:00.148) 0:01:14.844 **** 2026-02-18 03:37:43.461095 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.461104 | orchestrator | 2026-02-18 03:37:43.461113 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-18 03:37:43.461123 | orchestrator | Wednesday 18 February 2026 03:37:42 +0000 (0:00:00.366) 0:01:15.211 **** 2026-02-18 03:37:43.461132 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.461141 | orchestrator | 2026-02-18 03:37:43.461151 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-18 03:37:43.461160 | orchestrator | Wednesday 18 February 2026 03:37:42 +0000 (0:00:00.169) 0:01:15.380 **** 2026-02-18 03:37:43.461170 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.461179 | orchestrator | 2026-02-18 03:37:43.461188 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-18 03:37:43.461198 | orchestrator | Wednesday 18 February 2026 03:37:42 +0000 (0:00:00.164) 0:01:15.545 **** 2026-02-18 03:37:43.461207 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.461217 | orchestrator | 2026-02-18 03:37:43.461226 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-18 03:37:43.461236 | orchestrator | Wednesday 18 February 2026 03:37:42 +0000 (0:00:00.173) 0:01:15.718 **** 2026-02-18 03:37:43.461245 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 03:37:43.461260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 03:37:43.461277 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.461292 | orchestrator | 2026-02-18 03:37:43.461307 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-18 03:37:43.461323 | orchestrator | Wednesday 18 February 2026 03:37:43 +0000 (0:00:00.168) 0:01:15.886 **** 2026-02-18 03:37:43.461339 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 03:37:43.461354 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 03:37:43.461371 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:43.461388 | orchestrator | 2026-02-18 03:37:43.461404 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-18 03:37:43.461421 | orchestrator | Wednesday 18 February 2026 03:37:43 +0000 (0:00:00.174) 0:01:16.060 **** 2026-02-18 03:37:43.461454 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 03:37:46.688143 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 03:37:46.688357 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:46.688374 | orchestrator | 2026-02-18 03:37:46.688404 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-18 03:37:46.688417 | orchestrator | Wednesday 18 February 2026 03:37:43 +0000 (0:00:00.162) 0:01:16.223 **** 2026-02-18 03:37:46.688429 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 03:37:46.688440 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 03:37:46.688473 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:46.688484 | orchestrator | 2026-02-18 03:37:46.688495 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-18 03:37:46.688507 | orchestrator | Wednesday 18 February 2026 03:37:43 +0000 (0:00:00.164) 0:01:16.387 **** 2026-02-18 03:37:46.688518 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 03:37:46.688529 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 03:37:46.688540 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:46.688550 | orchestrator | 2026-02-18 03:37:46.688561 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-18 03:37:46.688606 | orchestrator | Wednesday 18 February 2026 03:37:43 +0000 (0:00:00.176) 0:01:16.564 **** 2026-02-18 03:37:46.688617 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 03:37:46.688628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 03:37:46.688639 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:46.688650 | orchestrator | 2026-02-18 03:37:46.688660 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-18 03:37:46.688671 | orchestrator | Wednesday 18 February 2026 03:37:43 +0000 (0:00:00.178) 0:01:16.743 **** 2026-02-18 03:37:46.688683 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 03:37:46.688696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 03:37:46.688708 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:46.688720 | orchestrator | 2026-02-18 03:37:46.688733 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-18 03:37:46.688745 | orchestrator | Wednesday 18 February 2026 03:37:44 +0000 (0:00:00.185) 0:01:16.928 **** 2026-02-18 03:37:46.688758 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 03:37:46.688771 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 03:37:46.688783 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:46.688795 | orchestrator | 2026-02-18 03:37:46.688808 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-18 03:37:46.688820 | orchestrator | Wednesday 18 February 2026 03:37:44 +0000 (0:00:00.158) 0:01:17.086 **** 2026-02-18 03:37:46.688833 | 
orchestrator | ok: [testbed-node-5] 2026-02-18 03:37:46.688846 | orchestrator | 2026-02-18 03:37:46.688859 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-18 03:37:46.688872 | orchestrator | Wednesday 18 February 2026 03:37:45 +0000 (0:00:00.774) 0:01:17.861 **** 2026-02-18 03:37:46.688883 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:37:46.688893 | orchestrator | 2026-02-18 03:37:46.688904 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-18 03:37:46.688916 | orchestrator | Wednesday 18 February 2026 03:37:45 +0000 (0:00:00.532) 0:01:18.393 **** 2026-02-18 03:37:46.688927 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:37:46.688937 | orchestrator | 2026-02-18 03:37:46.688948 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-18 03:37:46.688959 | orchestrator | Wednesday 18 February 2026 03:37:45 +0000 (0:00:00.161) 0:01:18.555 **** 2026-02-18 03:37:46.688978 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'vg_name': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'}) 2026-02-18 03:37:46.688990 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'vg_name': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'}) 2026-02-18 03:37:46.689001 | orchestrator | 2026-02-18 03:37:46.689012 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-18 03:37:46.689023 | orchestrator | Wednesday 18 February 2026 03:37:45 +0000 (0:00:00.191) 0:01:18.747 **** 2026-02-18 03:37:46.689052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 03:37:46.689070 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 03:37:46.689081 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:46.689092 | orchestrator | 2026-02-18 03:37:46.689103 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-18 03:37:46.689114 | orchestrator | Wednesday 18 February 2026 03:37:46 +0000 (0:00:00.158) 0:01:18.905 **** 2026-02-18 03:37:46.689124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 03:37:46.689135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 03:37:46.689146 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:46.689156 | orchestrator | 2026-02-18 03:37:46.689167 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-18 03:37:46.689177 | orchestrator | Wednesday 18 February 2026 03:37:46 +0000 (0:00:00.185) 0:01:19.091 **** 2026-02-18 03:37:46.689188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 03:37:46.689199 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 03:37:46.689209 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:37:46.689220 | orchestrator | 2026-02-18 03:37:46.689230 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-18 03:37:46.689241 | orchestrator | Wednesday 18 February 2026 03:37:46 +0000 (0:00:00.171) 0:01:19.263 **** 2026-02-18 03:37:46.689252 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-18 03:37:46.689262 | orchestrator |  "lvm_report": { 2026-02-18 03:37:46.689273 | orchestrator |  "lv": [ 2026-02-18 03:37:46.689284 | orchestrator |  { 2026-02-18 03:37:46.689295 | orchestrator |  "lv_name": "osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72", 2026-02-18 03:37:46.689306 | orchestrator |  "vg_name": "ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72" 2026-02-18 03:37:46.689316 | orchestrator |  }, 2026-02-18 03:37:46.689327 | orchestrator |  { 2026-02-18 03:37:46.689338 | orchestrator |  "lv_name": "osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3", 2026-02-18 03:37:46.689348 | orchestrator |  "vg_name": "ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3" 2026-02-18 03:37:46.689359 | orchestrator |  } 2026-02-18 03:37:46.689370 | orchestrator |  ], 2026-02-18 03:37:46.689380 | orchestrator |  "pv": [ 2026-02-18 03:37:46.689391 | orchestrator |  { 2026-02-18 03:37:46.689401 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-18 03:37:46.689412 | orchestrator |  "vg_name": "ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3" 2026-02-18 03:37:46.689422 | orchestrator |  }, 2026-02-18 03:37:46.689433 | orchestrator |  { 2026-02-18 03:37:46.689443 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-18 03:37:46.689466 | orchestrator |  "vg_name": "ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72" 2026-02-18 03:37:46.689477 | orchestrator |  } 2026-02-18 03:37:46.689487 | orchestrator |  ] 2026-02-18 03:37:46.689498 | orchestrator |  } 2026-02-18 03:37:46.689509 | orchestrator | } 2026-02-18 03:37:46.689519 | orchestrator | 2026-02-18 03:37:46.689531 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:37:46.689542 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-18 03:37:46.689552 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-18 03:37:46.689583 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-18 03:37:46.689595 | orchestrator | 2026-02-18 03:37:46.689606 | orchestrator | 2026-02-18 03:37:46.689616 | orchestrator | 2026-02-18 03:37:46.689627 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:37:46.689638 | orchestrator | Wednesday 18 February 2026 03:37:46 +0000 (0:00:00.168) 0:01:19.431 **** 2026-02-18 03:37:46.689649 | orchestrator | =============================================================================== 2026-02-18 03:37:46.689659 | orchestrator | Create block VGs -------------------------------------------------------- 5.80s 2026-02-18 03:37:46.689670 | orchestrator | Create block LVs -------------------------------------------------------- 4.25s 2026-02-18 03:37:46.689680 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.85s 2026-02-18 03:37:46.689691 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.85s 2026-02-18 03:37:46.689702 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.61s 2026-02-18 03:37:46.689713 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s 2026-02-18 03:37:46.689723 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.61s 2026-02-18 03:37:46.689734 | orchestrator | Add known links to the list of available block devices ------------------ 1.50s 2026-02-18 03:37:46.689752 | orchestrator | Add known partitions to the list of available block devices ------------- 1.39s 2026-02-18 03:37:47.117622 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.30s 2026-02-18 03:37:47.117724 | orchestrator | Add known links to the list of available block devices ------------------ 1.05s 2026-02-18 03:37:47.117735 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2026-02-18 03:37:47.117760 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.88s 2026-02-18 03:37:47.117768 | orchestrator | Print LVM report data --------------------------------------------------- 0.80s 2026-02-18 03:37:47.117776 | orchestrator | Get initial list of available block devices ----------------------------- 0.77s 2026-02-18 03:37:47.117795 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.77s 2026-02-18 03:37:47.117803 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-02-18 03:37:47.117821 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.75s 2026-02-18 03:37:47.117833 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-02-18 03:37:47.117852 | orchestrator | Count OSDs put on ceph_db_wal_devices defined in lvm_volumes ------------ 0.75s 2026-02-18 03:37:59.674169 | orchestrator | 2026-02-18 03:37:59 | INFO  | Task 9ad4678a-8167-4ded-bd94-e4f3610477ad (facts) was prepared for execution. 2026-02-18 03:37:59.674284 | orchestrator | 2026-02-18 03:37:59 | INFO  | It takes a moment until task 9ad4678a-8167-4ded-bd94-e4f3610477ad (facts) has been started and output is visible here. 
2026-02-18 03:38:13.820604 | orchestrator | 2026-02-18 03:38:13.820734 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-18 03:38:13.820773 | orchestrator | 2026-02-18 03:38:13.820783 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-18 03:38:13.820793 | orchestrator | Wednesday 18 February 2026 03:38:04 +0000 (0:00:00.302) 0:00:00.302 **** 2026-02-18 03:38:13.820802 | orchestrator | ok: [testbed-manager] 2026-02-18 03:38:13.820811 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:38:13.820820 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:38:13.820829 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:38:13.820837 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:38:13.820846 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:38:13.820854 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:38:13.820863 | orchestrator | 2026-02-18 03:38:13.820872 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-18 03:38:13.820880 | orchestrator | Wednesday 18 February 2026 03:38:05 +0000 (0:00:01.245) 0:00:01.548 **** 2026-02-18 03:38:13.820889 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:38:13.820898 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:13.820907 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:13.820915 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:13.820924 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:13.820932 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:13.820941 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:13.820949 | orchestrator | 2026-02-18 03:38:13.820958 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-18 03:38:13.820967 | orchestrator | 2026-02-18 03:38:13.820975 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-18 03:38:13.820984 | orchestrator | Wednesday 18 February 2026 03:38:06 +0000 (0:00:01.388) 0:00:02.936 **** 2026-02-18 03:38:13.820992 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:38:13.821001 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:38:13.821010 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:38:13.821018 | orchestrator | ok: [testbed-manager] 2026-02-18 03:38:13.821027 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:38:13.821036 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:38:13.821044 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:38:13.821053 | orchestrator | 2026-02-18 03:38:13.821061 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-18 03:38:13.821070 | orchestrator | 2026-02-18 03:38:13.821078 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-18 03:38:13.821087 | orchestrator | Wednesday 18 February 2026 03:38:12 +0000 (0:00:05.924) 0:00:08.861 **** 2026-02-18 03:38:13.821102 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:38:13.821118 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:13.821134 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:13.821149 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:13.821165 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:13.821181 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:13.821197 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:13.821213 | orchestrator | 2026-02-18 03:38:13.821228 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:38:13.821244 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:38:13.821261 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-18 03:38:13.821277 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:38:13.821294 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:38:13.821311 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:38:13.821340 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:38:13.821357 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:38:13.821372 | orchestrator | 2026-02-18 03:38:13.821388 | orchestrator | 2026-02-18 03:38:13.821404 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:38:13.821438 | orchestrator | Wednesday 18 February 2026 03:38:13 +0000 (0:00:00.571) 0:00:09.433 **** 2026-02-18 03:38:13.821507 | orchestrator | =============================================================================== 2026-02-18 03:38:13.821524 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.92s 2026-02-18 03:38:13.821540 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.39s 2026-02-18 03:38:13.821554 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s 2026-02-18 03:38:13.821569 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2026-02-18 03:38:16.421804 | orchestrator | 2026-02-18 03:38:16 | INFO  | Task 09ffe4fa-4790-4d98-85f3-49c6c507644a (ceph) was prepared for execution. 2026-02-18 03:38:16.421891 | orchestrator | 2026-02-18 03:38:16 | INFO  | It takes a moment until task 09ffe4fa-4790-4d98-85f3-49c6c507644a (ceph) has been started and output is visible here. 
2026-02-18 03:38:35.557643 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-18 03:38:35.557762 | orchestrator | 2.16.14 2026-02-18 03:38:35.557778 | orchestrator | 2026-02-18 03:38:35.557790 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-18 03:38:35.557802 | orchestrator | 2026-02-18 03:38:35.557813 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 03:38:35.557824 | orchestrator | Wednesday 18 February 2026 03:38:21 +0000 (0:00:00.831) 0:00:00.831 **** 2026-02-18 03:38:35.557836 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:38:35.557848 | orchestrator | 2026-02-18 03:38:35.557858 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 03:38:35.557869 | orchestrator | Wednesday 18 February 2026 03:38:23 +0000 (0:00:01.293) 0:00:02.125 **** 2026-02-18 03:38:35.557880 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:38:35.557891 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:38:35.557901 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:38:35.557912 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:38:35.557922 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:38:35.557933 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:38:35.557944 | orchestrator | 2026-02-18 03:38:35.557955 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-18 03:38:35.557966 | orchestrator | Wednesday 18 February 2026 03:38:24 +0000 (0:00:01.328) 0:00:03.453 **** 2026-02-18 03:38:35.557992 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:38:35.558003 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:38:35.558014 | orchestrator | ok: [testbed-node-5] 2026-02-18 
03:38:35.558088 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:38:35.558099 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:38:35.558109 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:38:35.558120 | orchestrator | 2026-02-18 03:38:35.558131 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 03:38:35.558142 | orchestrator | Wednesday 18 February 2026 03:38:25 +0000 (0:00:00.826) 0:00:04.279 **** 2026-02-18 03:38:35.558153 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:38:35.558165 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:38:35.558177 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:38:35.558189 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:38:35.558223 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:38:35.558236 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:38:35.558248 | orchestrator | 2026-02-18 03:38:35.558261 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 03:38:35.558273 | orchestrator | Wednesday 18 February 2026 03:38:26 +0000 (0:00:00.994) 0:00:05.273 **** 2026-02-18 03:38:35.558285 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:38:35.558298 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:38:35.558309 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:38:35.558322 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:38:35.558334 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:38:35.558346 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:38:35.558358 | orchestrator | 2026-02-18 03:38:35.558401 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-18 03:38:35.558414 | orchestrator | Wednesday 18 February 2026 03:38:27 +0000 (0:00:00.881) 0:00:06.154 **** 2026-02-18 03:38:35.558427 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:38:35.558439 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:38:35.558450 | orchestrator | ok: 
[testbed-node-5] 2026-02-18 03:38:35.558462 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:38:35.558475 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:38:35.558487 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:38:35.558499 | orchestrator | 2026-02-18 03:38:35.558512 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-18 03:38:35.558524 | orchestrator | Wednesday 18 February 2026 03:38:27 +0000 (0:00:00.624) 0:00:06.779 **** 2026-02-18 03:38:35.558534 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:38:35.558545 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:38:35.558555 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:38:35.558566 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:38:35.558576 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:38:35.558586 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:38:35.558597 | orchestrator | 2026-02-18 03:38:35.558608 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-18 03:38:35.558618 | orchestrator | Wednesday 18 February 2026 03:38:28 +0000 (0:00:00.887) 0:00:07.666 **** 2026-02-18 03:38:35.558629 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:35.558641 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:35.558651 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:35.558662 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:35.558673 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:35.558683 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:35.558694 | orchestrator | 2026-02-18 03:38:35.558704 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-18 03:38:35.558715 | orchestrator | Wednesday 18 February 2026 03:38:29 +0000 (0:00:00.645) 0:00:08.312 **** 2026-02-18 03:38:35.558726 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:38:35.558736 | orchestrator | 
ok: [testbed-node-4] 2026-02-18 03:38:35.558747 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:38:35.558757 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:38:35.558782 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:38:35.558793 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:38:35.558803 | orchestrator | 2026-02-18 03:38:35.558814 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-18 03:38:35.558824 | orchestrator | Wednesday 18 February 2026 03:38:30 +0000 (0:00:00.840) 0:00:09.152 **** 2026-02-18 03:38:35.558835 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 03:38:35.558846 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 03:38:35.558856 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 03:38:35.558867 | orchestrator | 2026-02-18 03:38:35.558877 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-18 03:38:35.558888 | orchestrator | Wednesday 18 February 2026 03:38:30 +0000 (0:00:00.694) 0:00:09.847 **** 2026-02-18 03:38:35.558906 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:38:35.558917 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:38:35.558928 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:38:35.558957 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:38:35.558969 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:38:35.558980 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:38:35.558990 | orchestrator | 2026-02-18 03:38:35.559001 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-18 03:38:35.559012 | orchestrator | Wednesday 18 February 2026 03:38:31 +0000 (0:00:00.774) 0:00:10.621 **** 2026-02-18 03:38:35.559023 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-02-18 03:38:35.559033 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 03:38:35.559044 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 03:38:35.559055 | orchestrator | 2026-02-18 03:38:35.559066 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-18 03:38:35.559076 | orchestrator | Wednesday 18 February 2026 03:38:34 +0000 (0:00:02.529) 0:00:13.151 **** 2026-02-18 03:38:35.559087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-18 03:38:35.559098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-18 03:38:35.559109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-18 03:38:35.559124 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:35.559148 | orchestrator | 2026-02-18 03:38:35.559177 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-18 03:38:35.559197 | orchestrator | Wednesday 18 February 2026 03:38:34 +0000 (0:00:00.415) 0:00:13.567 **** 2026-02-18 03:38:35.559218 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 03:38:35.559238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-18 03:38:35.559255 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 03:38:35.559274 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:35.559293 | orchestrator | 2026-02-18 03:38:35.559311 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-18 03:38:35.559328 | orchestrator | Wednesday 18 February 2026 03:38:35 +0000 (0:00:00.633) 0:00:14.200 **** 2026-02-18 03:38:35.559348 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:35.559440 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:35.559458 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:35.559489 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:35.559508 | orchestrator | 2026-02-18 03:38:35.559534 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-02-18 03:38:35.559552 | orchestrator | Wednesday 18 February 2026 03:38:35 +0000 (0:00:00.181) 0:00:14.382 **** 2026-02-18 03:38:35.559587 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 03:38:32.514485', 'end': '2026-02-18 03:38:32.554710', 'delta': '0:00:00.040225', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 03:38:45.753648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 03:38:33.151628', 'end': '2026-02-18 03:38:33.205664', 'delta': '0:00:00.054036', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-18 03:38:45.753825 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 03:38:33.695557', 'end': '2026-02-18 03:38:33.744067', 'delta': 
'0:00:00.048510', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 03:38:45.753852 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.753867 | orchestrator | 2026-02-18 03:38:45.753881 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-18 03:38:45.753893 | orchestrator | Wednesday 18 February 2026 03:38:35 +0000 (0:00:00.204) 0:00:14.587 **** 2026-02-18 03:38:45.753904 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:38:45.753917 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:38:45.753927 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:38:45.753938 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:38:45.753948 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:38:45.753959 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:38:45.753970 | orchestrator | 2026-02-18 03:38:45.753981 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-18 03:38:45.753991 | orchestrator | Wednesday 18 February 2026 03:38:36 +0000 (0:00:00.732) 0:00:15.319 **** 2026-02-18 03:38:45.754002 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-18 03:38:45.754014 | orchestrator | 2026-02-18 03:38:45.754124 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-18 03:38:45.754137 | orchestrator | Wednesday 18 February 2026 03:38:37 +0000 (0:00:00.848) 0:00:16.168 **** 2026-02-18 03:38:45.754195 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.754210 | 
orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:45.754223 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:45.754236 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:45.754249 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:45.754262 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:45.754276 | orchestrator | 2026-02-18 03:38:45.754288 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-18 03:38:45.754303 | orchestrator | Wednesday 18 February 2026 03:38:37 +0000 (0:00:00.844) 0:00:17.012 **** 2026-02-18 03:38:45.754316 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.754372 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:45.754385 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:45.754397 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:45.754407 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:45.754418 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:45.754429 | orchestrator | 2026-02-18 03:38:45.754440 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 03:38:45.754451 | orchestrator | Wednesday 18 February 2026 03:38:39 +0000 (0:00:01.259) 0:00:18.271 **** 2026-02-18 03:38:45.754462 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.754472 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:45.754483 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:45.754493 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:45.754504 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:45.754534 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:45.754546 | orchestrator | 2026-02-18 03:38:45.754557 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 03:38:45.754567 | orchestrator | Wednesday 18 February 2026 03:38:39 
+0000 (0:00:00.632) 0:00:18.904 **** 2026-02-18 03:38:45.754578 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.754588 | orchestrator | 2026-02-18 03:38:45.754599 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 03:38:45.754610 | orchestrator | Wednesday 18 February 2026 03:38:39 +0000 (0:00:00.125) 0:00:19.029 **** 2026-02-18 03:38:45.754621 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.754631 | orchestrator | 2026-02-18 03:38:45.754642 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 03:38:45.754653 | orchestrator | Wednesday 18 February 2026 03:38:40 +0000 (0:00:00.247) 0:00:19.276 **** 2026-02-18 03:38:45.754663 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.754674 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:45.754684 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:45.754695 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:45.754706 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:45.754718 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:45.754728 | orchestrator | 2026-02-18 03:38:45.754763 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 03:38:45.754775 | orchestrator | Wednesday 18 February 2026 03:38:41 +0000 (0:00:00.835) 0:00:20.112 **** 2026-02-18 03:38:45.754786 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.754796 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:45.754807 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:45.754817 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:45.754828 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:45.754839 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:45.754849 | orchestrator | 2026-02-18 03:38:45.754863 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-02-18 03:38:45.754882 | orchestrator | Wednesday 18 February 2026 03:38:41 +0000 (0:00:00.621) 0:00:20.733 **** 2026-02-18 03:38:45.754900 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.754927 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:45.754946 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:45.754979 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:45.754997 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:45.755015 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:45.755032 | orchestrator | 2026-02-18 03:38:45.755050 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-18 03:38:45.755069 | orchestrator | Wednesday 18 February 2026 03:38:42 +0000 (0:00:00.897) 0:00:21.631 **** 2026-02-18 03:38:45.755086 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.755104 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:45.755121 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:45.755139 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:45.755158 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:45.755174 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:45.755191 | orchestrator | 2026-02-18 03:38:45.755209 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 03:38:45.755228 | orchestrator | Wednesday 18 February 2026 03:38:43 +0000 (0:00:00.673) 0:00:22.305 **** 2026-02-18 03:38:45.755247 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.755265 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:45.755283 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:45.755301 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:45.755317 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:45.755366 | orchestrator 
| skipping: [testbed-node-2] 2026-02-18 03:38:45.755384 | orchestrator | 2026-02-18 03:38:45.755402 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-18 03:38:45.755422 | orchestrator | Wednesday 18 February 2026 03:38:44 +0000 (0:00:00.817) 0:00:23.122 **** 2026-02-18 03:38:45.755440 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.755459 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:45.755477 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:45.755494 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:45.755513 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:45.755530 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:45.755547 | orchestrator | 2026-02-18 03:38:45.755564 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 03:38:45.755584 | orchestrator | Wednesday 18 February 2026 03:38:44 +0000 (0:00:00.639) 0:00:23.761 **** 2026-02-18 03:38:45.755602 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:45.755621 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:45.755639 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:45.755657 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:45.755676 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:45.755695 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:38:45.755712 | orchestrator | 2026-02-18 03:38:45.755731 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-18 03:38:45.755750 | orchestrator | Wednesday 18 February 2026 03:38:45 +0000 (0:00:00.876) 0:00:24.637 **** 2026-02-18 03:38:45.755772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:45.755809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:45.755863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:45.916532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:45.916692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:45.916722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:45.916744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:45.916765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:45.916785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:45.916805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:45.916889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:45.916939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:45.916961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:45.916982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:45.917024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:45.917060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.011602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 
'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.011725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.011743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.011754 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:38:46.011766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.011776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.011786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.011841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.011852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.011862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.011899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.011913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.011938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.011988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.143037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.143188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.143209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.143229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.143287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.143350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.143367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.143382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.143426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.143442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.143457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.143486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.143528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.143572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.470989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.471098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.471140 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:38:46.471173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.471189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.471215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.471227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.471238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.471249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.471277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.471290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.471406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.471438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.471450 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:46.471462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.471473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.471493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.712821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.712974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.712993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.713006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.713033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:38:46.713072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:38:46.713099 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-18 03:38:46.713113 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:38:46.713126 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:38:46.713137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-18 03:38:46.713149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-18 03:38:46.713166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-18 03:38:46.713178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-18 03:38:46.713189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-18 03:38:46.713200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-18 03:38:46.713211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-18 03:38:46.713231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-18 03:38:46.936571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-18 03:38:46.936701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-18 03:38:46.936727 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:38:46.936748 | orchestrator |
2026-02-18 03:38:46.936768 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-18 03:38:46.936788 | orchestrator | Wednesday 18 February 2026 03:38:46 +0000 (0:00:01.105) 0:00:25.743 ****
2026-02-18 03:38:46.936808 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:46.936886 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:46.936903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:46.936916 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:46.936935 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:46.936947 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:46.936958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:46.936977 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:46.936999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.041270 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.041415 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.041433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.041467 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.041514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.041533 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.041550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.041579 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.041609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.585964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.586143 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.586165 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.586178 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.586212 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.586224 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.586255 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.586278 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.586304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.586386 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:38:47.586411 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.766747 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.766843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.766877 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.766889 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.766900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.766928 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.766945 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.766956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.766973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.766983 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.766994 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:38:47.767007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.767017 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:47.767046 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-18 03:38:47.874433 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:47.874575 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:47.874604 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:47.874627 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:47.874649 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:47.874702 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:47.874744 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:47.874805 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:47.874820 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:47.874842 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:47.874883 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:47.874917 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.039371 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
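The skipped-item records in the stream above are per-item loop results: each carries the `false_condition` that caused the skip and the full device facts under `item`. As an annotation (not part of the job output; the helper and trimmed sample data below are illustrative), such records can be grouped per host like this:

```python
# Hypothetical helper for summarizing skipped loop items like the ones
# in this log. The sample records are trimmed from the output above.

def group_skips(records):
    """Map host -> list of (device, false_condition) for skipped items."""
    by_host = {}
    for host, result in records:
        if result.get("skipped"):
            device = result["item"]["key"]
            condition = result.get("false_condition", "")
            by_host.setdefault(host, []).append((device, condition))
    return by_host

records = [
    ("testbed-node-5", {"skipped": True,
                        "false_condition": "osd_auto_discovery | default(False) | bool",
                        "item": {"key": "sdb"}}),
    ("testbed-node-0", {"skipped": True,
                        "false_condition": "inventory_hostname in groups.get(osd_group_name, [])",
                        "item": {"key": "sda"}}),
]
print(group_skips(records))
```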
2026-02-18 03:38:48.039467 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.039494 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:38:48.039505 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.039528 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.039536 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.039543 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.039550 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.039562 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.039574 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.039581 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.039593 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.286767 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.286852 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:38:48.286867 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:38:48.286880 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-18 03:38:48.286894 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.286907 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:38:48.286920 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 
03:38:48.286932 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:48.286984 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:48.286993 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:48.287000 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:48.287010 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:38:48.287037 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-18 03:39:00.391828 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:00.391939 | orchestrator |
2026-02-18 03:39:00.391952 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-18 03:39:00.391962 | orchestrator | Wednesday 18 February 2026 03:38:48 +0000 (0:00:01.565) 0:00:27.309 ****
2026-02-18 03:39:00.391969 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:00.391976 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:00.391983 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:00.391990 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:39:00.391997 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:39:00.392004 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:39:00.392010 | orchestrator |
2026-02-18 03:39:00.392018 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-18 03:39:00.392025 | orchestrator | Wednesday 18 February 2026 03:38:49 +0000 (0:00:00.959) 0:00:28.268 ****
2026-02-18 03:39:00.392032 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:00.392039 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:00.392046 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:00.392052 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:39:00.392059 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:39:00.392066 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:39:00.392073 | orchestrator |
2026-02-18 03:39:00.392080 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-18 03:39:00.392087 | orchestrator | Wednesday 18 February 2026 03:38:50 +0000 (0:00:00.825) 0:00:29.093 ****
2026-02-18 03:39:00.392094 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:00.392101 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:00.392108 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:00.392115 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:00.392121 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:00.392128 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:00.392135 | orchestrator |
2026-02-18 03:39:00.392142 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-18 03:39:00.392150 | orchestrator | Wednesday 18 February 2026 03:38:50 +0000 (0:00:00.625) 0:00:29.719 ****
2026-02-18 03:39:00.392157 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:00.392164 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:00.392170 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:00.392177 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:00.392184 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:00.392190 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:00.392197 | orchestrator |
2026-02-18 03:39:00.392204 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-18 03:39:00.392212 | orchestrator | Wednesday 18 February 2026 03:38:51 +0000 (0:00:00.882) 0:00:30.602 ****
2026-02-18 03:39:00.392220 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:00.392227 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:00.392234 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:00.392355 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:00.392366 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:00.392374 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:00.392382 | orchestrator |
2026-02-18 03:39:00.392389 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-18 03:39:00.392397 | orchestrator | Wednesday 18 February 2026 03:38:52 +0000 (0:00:00.658) 0:00:31.260 ****
2026-02-18 03:39:00.392404 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:00.392411 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:00.392418 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:00.392425 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:00.392432 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:00.392439 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:00.392447 | orchestrator |
2026-02-18 03:39:00.392454 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-18 03:39:00.392461 | orchestrator | Wednesday 18 February 2026 03:38:53 +0000 (0:00:00.879) 0:00:32.140 ****
2026-02-18 03:39:00.392469 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-18 03:39:00.392478 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-18 03:39:00.392486 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-18 03:39:00.392494 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-18 03:39:00.392503 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-18 03:39:00.392510 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-18 03:39:00.392518 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-18 03:39:00.392525 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 03:39:00.392533 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-18 03:39:00.392540 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-18 03:39:00.392547 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-18 03:39:00.392554 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-18 03:39:00.392562 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 03:39:00.392568 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-18 03:39:00.392574 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-18 03:39:00.392582 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 03:39:00.392589 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-18 03:39:00.392611 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-18 03:39:00.392618 | orchestrator |
2026-02-18 03:39:00.392625 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-18 03:39:00.392632 | orchestrator | Wednesday 18 February 2026 03:38:54 +0000 (0:00:01.624) 0:00:33.764 ****
2026-02-18 03:39:00.392639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-18 03:39:00.392647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-18 03:39:00.392654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-18 03:39:00.392662 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:00.392669 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-18 03:39:00.392677 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-18 03:39:00.392684 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-18 03:39:00.392711 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:00.392719 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-18 03:39:00.392727 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-18 03:39:00.392734 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-18 03:39:00.392742 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:00.392750 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 03:39:00.392756 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 03:39:00.392774 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 03:39:00.392782 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:00.392790 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-18 03:39:00.392798 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-18 03:39:00.392806 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-18 03:39:00.392813 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:00.392821 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-18 03:39:00.392828 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-18 03:39:00.392835 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-18 03:39:00.392843 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:00.392850 | orchestrator |
2026-02-18 03:39:00.392858 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-18 03:39:00.392866 | orchestrator | Wednesday 18 February 2026 03:38:55 +0000 (0:00:01.023) 0:00:34.788 ****
2026-02-18 03:39:00.392873 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:00.392880 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:00.392887 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:00.392895 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 03:39:00.392902 | orchestrator |
2026-02-18 03:39:00.392909 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-18 03:39:00.392918 | orchestrator | Wednesday 18 February 2026 03:38:56 +0000 (0:00:01.214) 0:00:36.003 ****
2026-02-18 03:39:00.392925 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:00.392932 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:00.392938 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:00.392945 | orchestrator |
2026-02-18 03:39:00.392953 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-18 03:39:00.392959 | orchestrator | Wednesday 18 February 2026 03:38:57 +0000 (0:00:00.357) 0:00:36.360 ****
2026-02-18 03:39:00.392967 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:00.392973 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:00.392980 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:00.392987 | orchestrator |
2026-02-18 03:39:00.392993 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-18 03:39:00.393001 | orchestrator | Wednesday 18 February 2026 03:38:57 +0000 (0:00:00.355) 0:00:36.716 ****
2026-02-18 03:39:00.393007 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:00.393014 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:00.393021 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:00.393028 | orchestrator |
2026-02-18 03:39:00.393035 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-18 03:39:00.393043 | orchestrator | Wednesday 18 February 2026 03:38:58 +0000 (0:00:00.368) 0:00:37.084 ****
2026-02-18 03:39:00.393049 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:00.393056 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:00.393063 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:00.393070 | orchestrator |
2026-02-18 03:39:00.393077 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-18 03:39:00.393083 | orchestrator | Wednesday 18 February 2026 03:38:58 +0000 (0:00:00.724) 0:00:37.809 ****
2026-02-18 03:39:00.393090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 03:39:00.393096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 03:39:00.393103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 03:39:00.393109 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:00.393115 | orchestrator |
2026-02-18 03:39:00.393121 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-18 03:39:00.393137 | orchestrator | Wednesday 18 February 2026 03:38:59 +0000 (0:00:00.412) 0:00:38.221 ****
2026-02-18 03:39:00.393142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 03:39:00.393148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 03:39:00.393154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 03:39:00.393161 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:00.393168 | orchestrator |
2026-02-18 03:39:00.393174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-18 03:39:00.393180 | orchestrator | Wednesday 18 February 2026 03:38:59 +0000 (0:00:00.421) 0:00:38.643 ****
2026-02-18 03:39:00.393194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 03:39:00.393201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 03:39:00.393207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 03:39:00.393213 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:00.393219 | orchestrator |
2026-02-18 03:39:00.393226 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-18 03:39:00.393232 | orchestrator | Wednesday 18 February 2026 03:39:00 +0000 (0:00:00.402) 0:00:39.045 ****
2026-02-18 03:39:00.393238 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:00.393244 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:00.393250 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:00.393256 | orchestrator |
2026-02-18 03:39:00.393262 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-18 03:39:00.393296 | orchestrator | Wednesday 18 February 2026 03:39:00 +0000 (0:00:00.375) 0:00:39.420 ****
2026-02-18 03:39:20.573594 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-18 03:39:20.573693 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-18 03:39:20.573702 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-18 03:39:20.573707 | orchestrator |
2026-02-18 03:39:20.573713 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-18 03:39:20.573719 | orchestrator | Wednesday 18 February 2026 03:39:01 +0000 (0:00:01.078) 0:00:40.499 ****
2026-02-18 03:39:20.573724 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 03:39:20.573729 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 03:39:20.573734 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 03:39:20.573739 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 03:39:20.573743 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-18 03:39:20.573748 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-18 03:39:20.573752 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-18 03:39:20.573756 | orchestrator |
2026-02-18 03:39:20.573761 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-18 03:39:20.573765 | orchestrator | Wednesday 18 February 2026 03:39:02 +0000 (0:00:00.916) 0:00:41.416 ****
2026-02-18 03:39:20.573769 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 03:39:20.573774 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 03:39:20.573778 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 03:39:20.573782 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 03:39:20.573786 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-18 03:39:20.573791 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-18 03:39:20.573795 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-18 03:39:20.573799 | orchestrator |
2026-02-18 03:39:20.573803 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-18 03:39:20.573822 | orchestrator | Wednesday 18 February 2026 03:39:04 +0000 (0:00:02.056) 0:00:43.472 ****
2026-02-18 03:39:20.573828 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:39:20.573833 | orchestrator |
2026-02-18 03:39:20.573838 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-18 03:39:20.573842 | orchestrator | Wednesday 18 February 2026 03:39:05 +0000 (0:00:01.288) 0:00:44.761 ****
2026-02-18 03:39:20.573847 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:39:20.573851 | orchestrator |
2026-02-18 03:39:20.573856 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-18 03:39:20.573860 | orchestrator | Wednesday 18 February 2026 03:39:07 +0000 (0:00:01.285) 0:00:46.046 ****
2026-02-18 03:39:20.573865 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:20.573869 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:20.573874 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:20.573878 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:39:20.573883 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:39:20.573887 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:39:20.573891 | orchestrator |
2026-02-18 03:39:20.573896 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-18 03:39:20.573900 | orchestrator | Wednesday 18 February 2026 03:39:08 +0000 (0:00:01.202) 0:00:47.248 ****
2026-02-18 03:39:20.573904 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:20.573909 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:20.573913 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:20.573917 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:20.573922 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:20.573926 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:20.573930 | orchestrator |
2026-02-18 03:39:20.573935 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-18 03:39:20.573939 | orchestrator | Wednesday 18 February 2026 03:39:08 +0000 (0:00:00.687) 0:00:47.936 ****
2026-02-18 03:39:20.573943 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:20.573948 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:20.573952 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:20.573956 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:20.573960 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:20.573975 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:20.573980 | orchestrator |
2026-02-18 03:39:20.573984 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-18 03:39:20.573988 | orchestrator | Wednesday 18 February 2026 03:39:09 +0000 (0:00:00.889) 0:00:48.825 ****
2026-02-18 03:39:20.573993 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:20.573997 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:20.574001 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:20.574006 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:20.574010 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:20.574052 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:20.574058 | orchestrator |
2026-02-18 03:39:20.574062 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-18 03:39:20.574066 | orchestrator | Wednesday 18 February 2026 03:39:10 +0000 (0:00:00.711) 0:00:49.536 ****
2026-02-18 03:39:20.574071 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:20.574075 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:20.574091 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:20.574096 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:39:20.574100 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:39:20.574104 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:39:20.574109 | orchestrator |
2026-02-18 03:39:20.574113 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-18 03:39:20.574123 | orchestrator | Wednesday 18 February 2026 03:39:11 +0000 (0:00:01.197) 0:00:50.734 ****
2026-02-18 03:39:20.574128 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:20.574132 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:20.574136 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:20.574141 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:20.574145 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:20.574149 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:20.574154 | orchestrator |
2026-02-18 03:39:20.574159 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-18 03:39:20.574164 | orchestrator | Wednesday 18 February 2026 03:39:12 +0000 (0:00:00.697) 0:00:51.432 ****
2026-02-18 03:39:20.574169 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:20.574174 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:20.574179 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:20.574184 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:20.574207 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:20.574215 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:20.574222 | orchestrator |
2026-02-18 03:39:20.574227 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-18 03:39:20.574232 | orchestrator | Wednesday 18 February 2026 03:39:13 +0000 (0:00:00.896) 0:00:52.328 ****
2026-02-18 03:39:20.574237 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:20.574242 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:20.574247 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:20.574252 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:39:20.574256 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:39:20.574261 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:39:20.574266 | orchestrator |
2026-02-18 03:39:20.574271 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-18 03:39:20.574276 | orchestrator | Wednesday 18 February 2026 03:39:14 +0000 (0:00:01.032) 0:00:53.360 ****
2026-02-18 03:39:20.574281 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:20.574286 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:20.574290 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:20.574294 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:39:20.574298 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:39:20.574303 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:39:20.574307 | orchestrator |
2026-02-18 03:39:20.574311 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-18 03:39:20.574315 | orchestrator | Wednesday 18 February 2026 03:39:15 +0000 (0:00:01.349) 0:00:54.710 ****
2026-02-18 03:39:20.574320 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:20.574324 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:20.574328 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:20.574333 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:20.574337 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:20.574341 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:20.574347 | orchestrator |
2026-02-18 03:39:20.574354 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-18 03:39:20.574361 | orchestrator | Wednesday 18 February 2026 03:39:16 +0000 (0:00:00.602) 0:00:55.312 ****
2026-02-18 03:39:20.574368 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:20.574375 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:20.574382 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:20.574389 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:39:20.574396 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:39:20.574417 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:39:20.574422 | orchestrator |
2026-02-18 03:39:20.574426 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-18 03:39:20.574431 | orchestrator | Wednesday 18 February 2026 03:39:17 +0000 (0:00:00.849) 0:00:56.162 ****
2026-02-18 03:39:20.574442 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:20.574452 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:20.574456 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:20.574460 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:20.574464 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:20.574469 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:20.574473 | orchestrator |
2026-02-18 03:39:20.574477 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-18 03:39:20.574482 | orchestrator | Wednesday 18 February 2026 03:39:17 +0000 (0:00:00.628) 0:00:56.791 ****
2026-02-18 03:39:20.574486 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:20.574490 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:20.574495 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:20.574499 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:20.574503 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:20.574507 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:20.574512 | orchestrator |
2026-02-18 03:39:20.574517 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-18 03:39:20.574524 | orchestrator | Wednesday 18 February 2026 03:39:18 +0000 (0:00:00.977) 0:00:57.768 ****
2026-02-18 03:39:20.574531 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:39:20.574538 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:39:20.574545 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:39:20.574551 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:20.574557 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:20.574570 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:20.574577 | orchestrator |
2026-02-18 03:39:20.574585 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-18 03:39:20.574592 | orchestrator | Wednesday 18 February 2026 03:39:19 +0000 (0:00:00.614) 0:00:58.383 ****
2026-02-18 03:39:20.574599 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:20.574606 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:39:20.574612 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:39:20.574619 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:39:20.574627 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:39:20.574631 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:39:20.574636 | orchestrator |
2026-02-18 03:39:20.574640 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-18 03:39:20.574644 | orchestrator | Wednesday 18 February 2026 03:39:20 +0000 (0:00:00.634) 0:00:59.296 ****
2026-02-18 03:39:20.574648 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:39:20.574658 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:40:33.082760 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:40:33.082866 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:40:33.082877 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:40:33.082885 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:40:33.082892 | orchestrator |
2026-02-18 03:40:33.082899 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-18 03:40:33.082909 | orchestrator | Wednesday 18 February 2026 03:39:20 +0000 (0:00:00.634) 0:00:59.930 ****
2026-02-18 03:40:33.082916 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:40:33.082922 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:40:33.082929 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:40:33.082960 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:40:33.082969 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:40:33.082976 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:40:33.082982 | orchestrator |
2026-02-18 03:40:33.082989 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-18 03:40:33.082996 | orchestrator | Wednesday 18 February 2026 03:39:21 +0000 (0:00:00.884) 0:01:00.815 ****
2026-02-18 03:40:33.083003 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:40:33.083010 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:40:33.083016 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:40:33.083023 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:40:33.083030 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:40:33.083037 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:40:33.083072 | orchestrator |
2026-02-18 03:40:33.083083 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-18 03:40:33.083094 | orchestrator | Wednesday 18 February 2026 03:39:22 +0000 (0:00:00.655) 0:01:01.471 ****
2026-02-18 03:40:33.083104 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:40:33.083114 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:40:33.083124 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:40:33.083134 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:40:33.083145 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:40:33.083156 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:40:33.083168 | orchestrator |
2026-02-18 03:40:33.083176 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-18 03:40:33.083182 | orchestrator | Wednesday 18 February 2026 03:39:23 +0000 (0:00:01.355) 0:01:02.826 ****
2026-02-18 03:40:33.083189 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:40:33.083196 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:40:33.083203 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:40:33.083209 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:40:33.083216 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:40:33.083223 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:40:33.083229 | orchestrator |
2026-02-18 03:40:33.083236 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-18 03:40:33.083243 | orchestrator | Wednesday 18 February 2026 03:39:25 +0000 (0:00:01.950) 0:01:04.517 ****
2026-02-18 03:40:33.083249 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:40:33.083256 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:40:33.083262 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:40:33.083269 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:40:33.083275 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:40:33.083282 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:40:33.083289 | orchestrator |
2026-02-18 03:40:33.083296 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-18 03:40:33.083302 | orchestrator | Wednesday 18 February 2026 03:39:27 +0000 (0:00:01.950) 0:01:06.468 ****
2026-02-18 03:40:33.083311 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:40:33.083321 | orchestrator |
2026-02-18 03:40:33.083329 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-18 03:40:33.083337 | orchestrator | Wednesday 18 February 2026 03:39:28 +0000 (0:00:01.549) 0:01:08.017 ****
2026-02-18 03:40:33.083345 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:40:33.083352 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:40:33.083360 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:40:33.083368 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:40:33.083376 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:40:33.083383 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:40:33.083391 | orchestrator |
2026-02-18 03:40:33.083398 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-18 03:40:33.083406 | orchestrator | Wednesday 18 February 2026 03:39:29 +0000 (0:00:00.874) 0:01:08.683 ****
2026-02-18 03:40:33.083414 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:40:33.083422 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:40:33.083429 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:40:33.083437 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:40:33.083445 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:40:33.083453 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:40:33.083460 | orchestrator |
2026-02-18 03:40:33.083468 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-18 03:40:33.083475 | orchestrator | Wednesday 18 February 2026 03:39:30 +0000 (0:00:00.874) 0:01:09.557 ****
2026-02-18 03:40:33.083483 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-18 03:40:33.083504 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-18 03:40:33.083519 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-18 03:40:33.083527 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-18 03:40:33.083534 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-18 03:40:33.083543 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-18 03:40:33.083551 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-18 03:40:33.083559 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-18 03:40:33.083567 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-18 03:40:33.083590 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-18 03:40:33.083598 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-18 03:40:33.083606 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-18 03:40:33.083614 | orchestrator |
2026-02-18 03:40:33.083621 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-18 03:40:33.083629 | orchestrator | Wednesday 18 February 2026 03:39:31 +0000 (0:00:01.232) 0:01:10.789 ****
2026-02-18 03:40:33.083636 | orchestrator |
changed: [testbed-node-4] 2026-02-18 03:40:33.083645 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:40:33.083652 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:40:33.083660 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:40:33.083668 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:40:33.083676 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:40:33.083683 | orchestrator | 2026-02-18 03:40:33.083691 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-18 03:40:33.083699 | orchestrator | Wednesday 18 February 2026 03:39:32 +0000 (0:00:01.151) 0:01:11.941 **** 2026-02-18 03:40:33.083706 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:33.083714 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:33.083723 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:33.083730 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:33.083737 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:33.083744 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:33.083750 | orchestrator | 2026-02-18 03:40:33.083757 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-18 03:40:33.083764 | orchestrator | Wednesday 18 February 2026 03:39:33 +0000 (0:00:00.627) 0:01:12.568 **** 2026-02-18 03:40:33.083770 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:33.083777 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:33.083783 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:33.083790 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:33.083796 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:33.083803 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:33.083810 | orchestrator | 2026-02-18 03:40:33.083816 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-18 03:40:33.083823 | 
orchestrator | Wednesday 18 February 2026 03:39:34 +0000 (0:00:00.924) 0:01:13.492 **** 2026-02-18 03:40:33.083830 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:33.083836 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:33.083843 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:33.083849 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:33.083856 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:33.083862 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:33.083869 | orchestrator | 2026-02-18 03:40:33.083876 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-18 03:40:33.083882 | orchestrator | Wednesday 18 February 2026 03:39:35 +0000 (0:00:00.675) 0:01:14.168 **** 2026-02-18 03:40:33.083894 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:40:33.083901 | orchestrator | 2026-02-18 03:40:33.083908 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-18 03:40:33.083914 | orchestrator | Wednesday 18 February 2026 03:39:36 +0000 (0:00:01.323) 0:01:15.491 **** 2026-02-18 03:40:33.083921 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:40:33.083928 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:40:33.083961 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:40:33.083975 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:40:33.083982 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:40:33.083988 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:40:33.083995 | orchestrator | 2026-02-18 03:40:33.084002 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-18 03:40:33.084008 | orchestrator | Wednesday 18 February 2026 03:40:32 +0000 (0:00:55.933) 0:02:11.425 **** 2026-02-18 
03:40:33.084015 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 03:40:33.084022 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 03:40:33.084029 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 03:40:33.084035 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:33.084042 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 03:40:33.084048 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 03:40:33.084055 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 03:40:33.084062 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:33.084068 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 03:40:33.084075 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 03:40:33.084085 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 03:40:33.084092 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:33.084099 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 03:40:33.084105 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 03:40:33.084112 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 03:40:33.084118 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:33.084125 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 03:40:33.084132 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 03:40:33.084138 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-02-18 03:40:33.084150 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.765700 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 03:40:57.765838 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 03:40:57.765932 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 03:40:57.765946 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.765958 | orchestrator | 2026-02-18 03:40:57.765969 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-18 03:40:57.765978 | orchestrator | Wednesday 18 February 2026 03:40:33 +0000 (0:00:00.693) 0:02:12.118 **** 2026-02-18 03:40:57.765988 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.765998 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:57.766008 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.766094 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.766111 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.766155 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.766259 | orchestrator | 2026-02-18 03:40:57.766281 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-18 03:40:57.766299 | orchestrator | Wednesday 18 February 2026 03:40:33 +0000 (0:00:00.856) 0:02:12.974 **** 2026-02-18 03:40:57.766316 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.766333 | orchestrator | 2026-02-18 03:40:57.766349 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-18 03:40:57.766366 | orchestrator | Wednesday 18 February 2026 03:40:34 +0000 (0:00:00.185) 0:02:13.160 **** 2026-02-18 03:40:57.766381 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.766399 | orchestrator | 
skipping: [testbed-node-4] 2026-02-18 03:40:57.766417 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.766446 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.766462 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.766477 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.766493 | orchestrator | 2026-02-18 03:40:57.766510 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-18 03:40:57.766527 | orchestrator | Wednesday 18 February 2026 03:40:34 +0000 (0:00:00.674) 0:02:13.834 **** 2026-02-18 03:40:57.766545 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.766562 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:57.766580 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.766596 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.766614 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.766625 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.766634 | orchestrator | 2026-02-18 03:40:57.766644 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-18 03:40:57.766654 | orchestrator | Wednesday 18 February 2026 03:40:35 +0000 (0:00:00.880) 0:02:14.715 **** 2026-02-18 03:40:57.766663 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.766673 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:57.766682 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.766693 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.766702 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.766712 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.766721 | orchestrator | 2026-02-18 03:40:57.766731 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-18 03:40:57.766740 | orchestrator | Wednesday 18 February 2026 03:40:36 +0000 
(0:00:00.682) 0:02:15.398 **** 2026-02-18 03:40:57.766750 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:40:57.766760 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:40:57.766770 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:40:57.766779 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:40:57.766789 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:40:57.766798 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:40:57.766807 | orchestrator | 2026-02-18 03:40:57.766817 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-18 03:40:57.766828 | orchestrator | Wednesday 18 February 2026 03:40:39 +0000 (0:00:03.612) 0:02:19.011 **** 2026-02-18 03:40:57.766837 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:40:57.766846 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:40:57.766878 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:40:57.766890 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:40:57.766900 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:40:57.766909 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:40:57.766919 | orchestrator | 2026-02-18 03:40:57.766928 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-18 03:40:57.766938 | orchestrator | Wednesday 18 February 2026 03:40:40 +0000 (0:00:00.631) 0:02:19.642 **** 2026-02-18 03:40:57.766949 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:40:57.766960 | orchestrator | 2026-02-18 03:40:57.766969 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-18 03:40:57.766995 | orchestrator | Wednesday 18 February 2026 03:40:41 +0000 (0:00:01.325) 0:02:20.968 **** 2026-02-18 03:40:57.767005 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.767015 | orchestrator | 
skipping: [testbed-node-4] 2026-02-18 03:40:57.767025 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.767034 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.767057 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.767067 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.767076 | orchestrator | 2026-02-18 03:40:57.767086 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-18 03:40:57.767096 | orchestrator | Wednesday 18 February 2026 03:40:42 +0000 (0:00:00.969) 0:02:21.938 **** 2026-02-18 03:40:57.767105 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.767115 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:57.767124 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.767133 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.767143 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.767152 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.767162 | orchestrator | 2026-02-18 03:40:57.767171 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-18 03:40:57.767181 | orchestrator | Wednesday 18 February 2026 03:40:43 +0000 (0:00:00.661) 0:02:22.599 **** 2026-02-18 03:40:57.767191 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.767225 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:57.767235 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.767245 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.767254 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.767264 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.767273 | orchestrator | 2026-02-18 03:40:57.767283 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-18 03:40:57.767292 | orchestrator | Wednesday 18 February 2026 03:40:44 +0000 
(0:00:00.892) 0:02:23.492 **** 2026-02-18 03:40:57.767302 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.767311 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:57.767320 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.767330 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.767339 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.767348 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.767358 | orchestrator | 2026-02-18 03:40:57.767367 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-18 03:40:57.767377 | orchestrator | Wednesday 18 February 2026 03:40:45 +0000 (0:00:00.663) 0:02:24.155 **** 2026-02-18 03:40:57.767386 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.767396 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:57.767405 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.767415 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.767446 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.767455 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.767465 | orchestrator | 2026-02-18 03:40:57.767475 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-18 03:40:57.767492 | orchestrator | Wednesday 18 February 2026 03:40:46 +0000 (0:00:00.925) 0:02:25.081 **** 2026-02-18 03:40:57.767509 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.767526 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:57.767544 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.767561 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.767578 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.767592 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.767608 | orchestrator | 2026-02-18 03:40:57.767625 | orchestrator | TASK [ceph-container-common : 
Set_fact ceph_release pacific] ******************* 2026-02-18 03:40:57.767642 | orchestrator | Wednesday 18 February 2026 03:40:46 +0000 (0:00:00.674) 0:02:25.755 **** 2026-02-18 03:40:57.767674 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.767692 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:57.767710 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.767728 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.767746 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.767763 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.767777 | orchestrator | 2026-02-18 03:40:57.767787 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-18 03:40:57.767797 | orchestrator | Wednesday 18 February 2026 03:40:47 +0000 (0:00:00.929) 0:02:26.684 **** 2026-02-18 03:40:57.767806 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:40:57.767816 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:40:57.767825 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:40:57.767835 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:40:57.767844 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:40:57.767853 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:40:57.767889 | orchestrator | 2026-02-18 03:40:57.767899 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-18 03:40:57.767908 | orchestrator | Wednesday 18 February 2026 03:40:48 +0000 (0:00:00.739) 0:02:27.424 **** 2026-02-18 03:40:57.767918 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:40:57.767928 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:40:57.767937 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:40:57.767947 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:40:57.767956 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:40:57.767966 | orchestrator | ok: [testbed-node-2] 2026-02-18 
03:40:57.767975 | orchestrator | 2026-02-18 03:40:57.767985 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-18 03:40:57.767994 | orchestrator | Wednesday 18 February 2026 03:40:49 +0000 (0:00:01.357) 0:02:28.782 **** 2026-02-18 03:40:57.768005 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:40:57.768016 | orchestrator | 2026-02-18 03:40:57.768026 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-18 03:40:57.768036 | orchestrator | Wednesday 18 February 2026 03:40:51 +0000 (0:00:01.341) 0:02:30.124 **** 2026-02-18 03:40:57.768045 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-18 03:40:57.768055 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-18 03:40:57.768065 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-18 03:40:57.768074 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-18 03:40:57.768084 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-18 03:40:57.768093 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-18 03:40:57.768103 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-18 03:40:57.768121 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-18 03:40:57.768131 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-18 03:40:57.768140 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-18 03:40:57.768149 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-18 03:40:57.768159 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-18 03:40:57.768168 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-18 
03:40:57.768178 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-18 03:40:57.768187 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-18 03:40:57.768196 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-18 03:40:57.768207 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-18 03:40:57.768226 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-18 03:41:03.293658 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-18 03:41:03.293777 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-18 03:41:03.293814 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-18 03:41:03.293824 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-18 03:41:03.293833 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-18 03:41:03.293908 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-18 03:41:03.293918 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-18 03:41:03.293927 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-18 03:41:03.293936 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-18 03:41:03.293944 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-18 03:41:03.293953 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-18 03:41:03.293961 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-18 03:41:03.293974 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-18 03:41:03.293990 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-18 03:41:03.294004 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-18 03:41:03.294088 | orchestrator | 
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-18 03:41:03.294099 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-18 03:41:03.294109 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-18 03:41:03.294117 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-18 03:41:03.294126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-18 03:41:03.294134 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-18 03:41:03.294143 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-18 03:41:03.294151 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-18 03:41:03.294160 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 03:41:03.294168 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-18 03:41:03.294177 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-18 03:41:03.294185 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-18 03:41:03.294193 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-18 03:41:03.294202 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-18 03:41:03.294213 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 03:41:03.294223 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 03:41:03.294233 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-18 03:41:03.294243 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-18 03:41:03.294253 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 03:41:03.294263 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 
2026-02-18 03:41:03.294273 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 03:41:03.294283 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 03:41:03.294293 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 03:41:03.294304 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 03:41:03.294313 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 03:41:03.294323 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 03:41:03.294333 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 03:41:03.294343 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 03:41:03.294363 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 03:41:03.294373 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 03:41:03.294382 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 03:41:03.294393 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 03:41:03.294404 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 03:41:03.294419 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 03:41:03.294448 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 03:41:03.294462 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 03:41:03.294476 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 03:41:03.294490 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 03:41:03.294504 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 03:41:03.294519 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 03:41:03.294535 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 03:41:03.294549 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 03:41:03.294563 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 03:41:03.294590 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-18 03:41:03.294600 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 03:41:03.294622 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 03:41:03.294631 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 03:41:03.294640 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 03:41:03.294649 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 03:41:03.294657 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-18 03:41:03.294666 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 03:41:03.294674 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-18 03:41:03.294683 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 03:41:03.294692 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 03:41:03.294701 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-18 03:41:03.294710 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-18 03:41:03.294719 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-18 03:41:03.294727 | orchestrator 
| changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-18 03:41:03.294736 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-18 03:41:03.294745 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-18 03:41:03.294753 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-18 03:41:03.294762 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-18 03:41:03.294771 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-18 03:41:03.294779 | orchestrator | 2026-02-18 03:41:03.294789 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-18 03:41:03.294797 | orchestrator | Wednesday 18 February 2026 03:40:57 +0000 (0:00:06.664) 0:02:36.788 **** 2026-02-18 03:41:03.294806 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:03.294815 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:03.294824 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:03.294833 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:41:03.294877 | orchestrator | 2026-02-18 03:41:03.294886 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-18 03:41:03.294895 | orchestrator | Wednesday 18 February 2026 03:40:58 +0000 (0:00:01.114) 0:02:37.903 **** 2026-02-18 03:41:03.294904 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 03:41:03.294913 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 03:41:03.294922 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 
2026-02-18 03:41:03.294930 | orchestrator | 2026-02-18 03:41:03.294939 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-18 03:41:03.294947 | orchestrator | Wednesday 18 February 2026 03:40:59 +0000 (0:00:00.753) 0:02:38.657 **** 2026-02-18 03:41:03.294956 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 03:41:03.294965 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 03:41:03.294973 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-18 03:41:03.294982 | orchestrator | 2026-02-18 03:41:03.294990 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-18 03:41:03.294999 | orchestrator | Wednesday 18 February 2026 03:41:00 +0000 (0:00:01.234) 0:02:39.892 **** 2026-02-18 03:41:03.295009 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:41:03.295024 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:41:03.295038 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:41:03.295079 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:03.295096 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:03.295111 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:03.295126 | orchestrator | 2026-02-18 03:41:03.295136 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-18 03:41:03.295151 | orchestrator | Wednesday 18 February 2026 03:41:01 +0000 (0:00:00.879) 0:02:40.771 **** 2026-02-18 03:41:03.295160 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:41:03.295168 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:41:03.295177 | orchestrator | ok: [testbed-node-5] 2026-02-18 
03:41:03.295185 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:03.295194 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:03.295202 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:03.295211 | orchestrator | 2026-02-18 03:41:03.295219 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-18 03:41:03.295228 | orchestrator | Wednesday 18 February 2026 03:41:02 +0000 (0:00:00.678) 0:02:41.449 **** 2026-02-18 03:41:03.295236 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:03.295245 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:03.295253 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:03.295262 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:03.295270 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:03.295279 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:03.295287 | orchestrator | 2026-02-18 03:41:03.295303 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-18 03:41:17.294466 | orchestrator | Wednesday 18 February 2026 03:41:03 +0000 (0:00:00.879) 0:02:42.329 **** 2026-02-18 03:41:17.294630 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:17.294652 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:17.294665 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:17.294677 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.294689 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.294729 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.294742 | orchestrator | 2026-02-18 03:41:17.294755 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-18 03:41:17.294767 | orchestrator | Wednesday 18 February 2026 03:41:03 +0000 (0:00:00.624) 0:02:42.953 **** 2026-02-18 03:41:17.294779 | orchestrator | skipping: [testbed-node-3] 2026-02-18 
03:41:17.294790 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:17.294828 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:17.294835 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.294843 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.294854 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.294865 | orchestrator | 2026-02-18 03:41:17.294877 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-18 03:41:17.294891 | orchestrator | Wednesday 18 February 2026 03:41:04 +0000 (0:00:00.932) 0:02:43.885 **** 2026-02-18 03:41:17.294900 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:17.294909 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:17.294919 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:17.294928 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.294938 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.294948 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.294957 | orchestrator | 2026-02-18 03:41:17.294968 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-18 03:41:17.294978 | orchestrator | Wednesday 18 February 2026 03:41:05 +0000 (0:00:00.648) 0:02:44.534 **** 2026-02-18 03:41:17.294988 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:17.294999 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:17.295009 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:17.295019 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.295030 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.295040 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.295050 | orchestrator | 2026-02-18 03:41:17.295062 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' 
(new report)] *** 2026-02-18 03:41:17.295072 | orchestrator | Wednesday 18 February 2026 03:41:06 +0000 (0:00:00.925) 0:02:45.459 **** 2026-02-18 03:41:17.295084 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:17.295096 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:17.295107 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:17.295118 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.295129 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.295140 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.295151 | orchestrator | 2026-02-18 03:41:17.295159 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-18 03:41:17.295166 | orchestrator | Wednesday 18 February 2026 03:41:07 +0000 (0:00:00.641) 0:02:46.100 **** 2026-02-18 03:41:17.295173 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.295180 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.295187 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.295193 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:41:17.295202 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:41:17.295209 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:41:17.295215 | orchestrator | 2026-02-18 03:41:17.295222 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-18 03:41:17.295228 | orchestrator | Wednesday 18 February 2026 03:41:09 +0000 (0:00:02.917) 0:02:49.018 **** 2026-02-18 03:41:17.295235 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:41:17.295241 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:41:17.295248 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:41:17.295254 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.295261 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.295267 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.295274 | 
orchestrator | 2026-02-18 03:41:17.295280 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-18 03:41:17.295297 | orchestrator | Wednesday 18 February 2026 03:41:10 +0000 (0:00:00.698) 0:02:49.716 **** 2026-02-18 03:41:17.295304 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:41:17.295311 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:41:17.295317 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:41:17.295324 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.295330 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.295337 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.295343 | orchestrator | 2026-02-18 03:41:17.295350 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-18 03:41:17.295356 | orchestrator | Wednesday 18 February 2026 03:41:11 +0000 (0:00:00.881) 0:02:50.598 **** 2026-02-18 03:41:17.295363 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:17.295370 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:17.295393 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:17.295400 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.295406 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.295413 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.295420 | orchestrator | 2026-02-18 03:41:17.295427 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-18 03:41:17.295438 | orchestrator | Wednesday 18 February 2026 03:41:12 +0000 (0:00:00.671) 0:02:51.269 **** 2026-02-18 03:41:17.295449 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 03:41:17.295462 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 
2026-02-18 03:41:17.295471 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-18 03:41:17.295480 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.295516 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.295530 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.295541 | orchestrator | 2026-02-18 03:41:17.295551 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-18 03:41:17.295560 | orchestrator | Wednesday 18 February 2026 03:41:13 +0000 (0:00:00.966) 0:02:52.235 **** 2026-02-18 03:41:17.295573 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-02-18 03:41:17.295588 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-02-18 03:41:17.295599 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:17.295609 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-02-18 03:41:17.295620 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-02-18 03:41:17.295630 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:17.295639 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-02-18 03:41:17.295658 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-02-18 03:41:17.295670 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:17.295680 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.295689 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.295699 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.295708 | orchestrator | 2026-02-18 03:41:17.295719 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-18 03:41:17.295732 | orchestrator | Wednesday 18 February 2026 03:41:13 +0000 (0:00:00.670) 0:02:52.906 **** 2026-02-18 03:41:17.295744 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:17.295754 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:17.295763 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:17.295772 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.295783 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.295793 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.295945 | orchestrator | 
2026-02-18 03:41:17.295957 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-18 03:41:17.295963 | orchestrator | Wednesday 18 February 2026 03:41:14 +0000 (0:00:00.920) 0:02:53.826 **** 2026-02-18 03:41:17.295970 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:17.295976 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:17.295983 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:17.295989 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.295996 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.296002 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.296009 | orchestrator | 2026-02-18 03:41:17.296015 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 03:41:17.296031 | orchestrator | Wednesday 18 February 2026 03:41:15 +0000 (0:00:00.628) 0:02:54.454 **** 2026-02-18 03:41:17.296038 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:17.296044 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:17.296051 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:17.296057 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:17.296064 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.296070 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.296077 | orchestrator | 2026-02-18 03:41:17.296083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 03:41:17.296090 | orchestrator | Wednesday 18 February 2026 03:41:16 +0000 (0:00:00.974) 0:02:55.429 **** 2026-02-18 03:41:17.296096 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:17.296103 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:17.296109 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:17.296116 | orchestrator | skipping: 
[testbed-node-0] 2026-02-18 03:41:17.296122 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:17.296129 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:17.296135 | orchestrator | 2026-02-18 03:41:17.296142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 03:41:17.296160 | orchestrator | Wednesday 18 February 2026 03:41:17 +0000 (0:00:00.889) 0:02:56.319 **** 2026-02-18 03:41:35.967835 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.967977 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:35.968005 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:35.968026 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:35.968045 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:35.968064 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:35.968100 | orchestrator | 2026-02-18 03:41:35.968118 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 03:41:35.968139 | orchestrator | Wednesday 18 February 2026 03:41:17 +0000 (0:00:00.699) 0:02:57.018 **** 2026-02-18 03:41:35.968157 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:41:35.968176 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:41:35.968194 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:35.968211 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:41:35.968227 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:35.968244 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:35.968261 | orchestrator | 2026-02-18 03:41:35.968280 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 03:41:35.968298 | orchestrator | Wednesday 18 February 2026 03:41:18 +0000 (0:00:01.013) 0:02:58.032 **** 2026-02-18 03:41:35.968317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 03:41:35.968336 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 03:41:35.968357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 03:41:35.968376 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.968398 | orchestrator | 2026-02-18 03:41:35.968423 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 03:41:35.968448 | orchestrator | Wednesday 18 February 2026 03:41:19 +0000 (0:00:00.443) 0:02:58.476 **** 2026-02-18 03:41:35.968467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 03:41:35.968485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 03:41:35.968503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 03:41:35.968521 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.968539 | orchestrator | 2026-02-18 03:41:35.968558 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 03:41:35.968577 | orchestrator | Wednesday 18 February 2026 03:41:19 +0000 (0:00:00.451) 0:02:58.927 **** 2026-02-18 03:41:35.968595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 03:41:35.968614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 03:41:35.968632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 03:41:35.968651 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.968662 | orchestrator | 2026-02-18 03:41:35.968672 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 03:41:35.968683 | orchestrator | Wednesday 18 February 2026 03:41:20 +0000 (0:00:00.464) 0:02:59.392 **** 2026-02-18 03:41:35.968694 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:41:35.968705 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:41:35.968715 | orchestrator | ok: [testbed-node-5] 
2026-02-18 03:41:35.968726 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:35.968737 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:35.968779 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:35.968792 | orchestrator | 2026-02-18 03:41:35.968802 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 03:41:35.968813 | orchestrator | Wednesday 18 February 2026 03:41:21 +0000 (0:00:00.763) 0:03:00.155 **** 2026-02-18 03:41:35.968824 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-18 03:41:35.968834 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-18 03:41:35.968846 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-18 03:41:35.968856 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-18 03:41:35.968867 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:35.968877 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-18 03:41:35.968888 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:35.968899 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-18 03:41:35.968909 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:35.968920 | orchestrator | 2026-02-18 03:41:35.968931 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-18 03:41:35.968956 | orchestrator | Wednesday 18 February 2026 03:41:23 +0000 (0:00:01.936) 0:03:02.092 **** 2026-02-18 03:41:35.968967 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:41:35.968983 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:41:35.969001 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:41:35.969020 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:41:35.969039 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:41:35.969057 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:41:35.969072 | orchestrator | 2026-02-18 03:41:35.969082 | orchestrator | RUNNING 
HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-18 03:41:35.969093 | orchestrator | Wednesday 18 February 2026 03:41:25 +0000 (0:00:02.757) 0:03:04.850 **** 2026-02-18 03:41:35.969103 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:41:35.969131 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:41:35.969142 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:41:35.969153 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:41:35.969164 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:41:35.969175 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:41:35.969185 | orchestrator | 2026-02-18 03:41:35.969196 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-18 03:41:35.969206 | orchestrator | Wednesday 18 February 2026 03:41:26 +0000 (0:00:01.051) 0:03:05.901 **** 2026-02-18 03:41:35.969217 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.969227 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:35.969238 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:35.969249 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:41:35.969260 | orchestrator | 2026-02-18 03:41:35.969271 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-18 03:41:35.969282 | orchestrator | Wednesday 18 February 2026 03:41:28 +0000 (0:00:01.150) 0:03:07.052 **** 2026-02-18 03:41:35.969292 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:41:35.969326 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:41:35.969338 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:41:35.969348 | orchestrator | 2026-02-18 03:41:35.969359 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-18 03:41:35.969370 | orchestrator | Wednesday 18 February 2026 03:41:28 
+0000 (0:00:00.422) 0:03:07.474 **** 2026-02-18 03:41:35.969380 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:41:35.969390 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:41:35.969401 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:41:35.969412 | orchestrator | 2026-02-18 03:41:35.969422 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-18 03:41:35.969433 | orchestrator | Wednesday 18 February 2026 03:41:30 +0000 (0:00:01.576) 0:03:09.051 **** 2026-02-18 03:41:35.969444 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-18 03:41:35.969454 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-18 03:41:35.969465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-18 03:41:35.969475 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:35.969486 | orchestrator | 2026-02-18 03:41:35.969496 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-18 03:41:35.969507 | orchestrator | Wednesday 18 February 2026 03:41:30 +0000 (0:00:00.697) 0:03:09.748 **** 2026-02-18 03:41:35.969519 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:41:35.969536 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:41:35.969555 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:41:35.969578 | orchestrator | 2026-02-18 03:41:35.969603 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-18 03:41:35.969621 | orchestrator | Wednesday 18 February 2026 03:41:31 +0000 (0:00:00.362) 0:03:10.111 **** 2026-02-18 03:41:35.969638 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:41:35.969655 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:41:35.969673 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:41:35.969703 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-02-18 03:41:35.969721 | orchestrator | 2026-02-18 03:41:35.969740 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-18 03:41:35.969833 | orchestrator | Wednesday 18 February 2026 03:41:32 +0000 (0:00:01.192) 0:03:11.303 **** 2026-02-18 03:41:35.969851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 03:41:35.969868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 03:41:35.969879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 03:41:35.969890 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.969901 | orchestrator | 2026-02-18 03:41:35.969912 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-18 03:41:35.969922 | orchestrator | Wednesday 18 February 2026 03:41:32 +0000 (0:00:00.427) 0:03:11.730 **** 2026-02-18 03:41:35.969933 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.969944 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:35.969954 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:35.969965 | orchestrator | 2026-02-18 03:41:35.969975 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-18 03:41:35.969986 | orchestrator | Wednesday 18 February 2026 03:41:33 +0000 (0:00:00.352) 0:03:12.083 **** 2026-02-18 03:41:35.969997 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.970007 | orchestrator | 2026-02-18 03:41:35.970090 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-02-18 03:41:35.970106 | orchestrator | Wednesday 18 February 2026 03:41:33 +0000 (0:00:00.239) 0:03:12.323 **** 2026-02-18 03:41:35.970116 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.970129 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:41:35.970149 | 
orchestrator | skipping: [testbed-node-5] 2026-02-18 03:41:35.970175 | orchestrator | 2026-02-18 03:41:35.970199 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-02-18 03:41:35.970218 | orchestrator | Wednesday 18 February 2026 03:41:33 +0000 (0:00:00.335) 0:03:12.658 **** 2026-02-18 03:41:35.970237 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.970254 | orchestrator | 2026-02-18 03:41:35.970274 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-02-18 03:41:35.970293 | orchestrator | Wednesday 18 February 2026 03:41:34 +0000 (0:00:00.768) 0:03:13.427 **** 2026-02-18 03:41:35.970312 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.970331 | orchestrator | 2026-02-18 03:41:35.970352 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-02-18 03:41:35.970372 | orchestrator | Wednesday 18 February 2026 03:41:34 +0000 (0:00:00.256) 0:03:13.684 **** 2026-02-18 03:41:35.970388 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.970399 | orchestrator | 2026-02-18 03:41:35.970410 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-02-18 03:41:35.970421 | orchestrator | Wednesday 18 February 2026 03:41:34 +0000 (0:00:00.138) 0:03:13.822 **** 2026-02-18 03:41:35.970441 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.970452 | orchestrator | 2026-02-18 03:41:35.970463 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-02-18 03:41:35.970473 | orchestrator | Wednesday 18 February 2026 03:41:35 +0000 (0:00:00.253) 0:03:14.076 **** 2026-02-18 03:41:35.970484 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:41:35.970495 | orchestrator | 2026-02-18 03:41:35.970505 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 
2026-02-18 03:41:35.970516 | orchestrator | Wednesday 18 February 2026 03:41:35 +0000 (0:00:00.281) 0:03:14.358 ****
2026-02-18 03:41:35.970527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 03:41:35.970537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 03:41:35.970548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 03:41:35.970569 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:41:35.970580 | orchestrator |
2026-02-18 03:41:35.970591 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-18 03:41:35.970602 | orchestrator | Wednesday 18 February 2026 03:41:35 +0000 (0:00:00.432) 0:03:14.790 ****
2026-02-18 03:41:35.970626 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:41:55.259038 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:41:55.259173 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:41:55.259182 | orchestrator |
2026-02-18 03:41:55.259190 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-18 03:41:55.259199 | orchestrator | Wednesday 18 February 2026 03:41:36 +0000 (0:00:00.337) 0:03:15.127 ****
2026-02-18 03:41:55.259205 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:41:55.259211 | orchestrator |
2026-02-18 03:41:55.259217 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-18 03:41:55.259224 | orchestrator | Wednesday 18 February 2026 03:41:36 +0000 (0:00:00.248) 0:03:15.376 ****
2026-02-18 03:41:55.259230 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:41:55.259236 | orchestrator |
2026-02-18 03:41:55.259242 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-18 03:41:55.259249 | orchestrator | Wednesday 18 February 2026 03:41:36 +0000 (0:00:00.239) 0:03:15.615 ****
2026-02-18 03:41:55.259255 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:41:55.259260 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:41:55.259266 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:41:55.259274 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 03:41:55.259280 | orchestrator |
2026-02-18 03:41:55.259286 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-18 03:41:55.259292 | orchestrator | Wednesday 18 February 2026 03:41:37 +0000 (0:00:01.151) 0:03:16.766 ****
2026-02-18 03:41:55.259298 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:41:55.259307 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:41:55.259314 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:41:55.259320 | orchestrator |
2026-02-18 03:41:55.259326 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-18 03:41:55.259333 | orchestrator | Wednesday 18 February 2026 03:41:38 +0000 (0:00:00.381) 0:03:17.147 ****
2026-02-18 03:41:55.259339 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:41:55.259345 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:41:55.259351 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:41:55.259357 | orchestrator |
2026-02-18 03:41:55.259363 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-18 03:41:55.259369 | orchestrator | Wednesday 18 February 2026 03:41:39 +0000 (0:00:01.592) 0:03:18.739 ****
2026-02-18 03:41:55.259376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 03:41:55.259382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 03:41:55.259389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 03:41:55.259394 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:41:55.259400 | orchestrator |
2026-02-18 03:41:55.259406 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-18 03:41:55.259412 | orchestrator | Wednesday 18 February 2026 03:41:40 +0000 (0:00:00.667) 0:03:19.407 ****
2026-02-18 03:41:55.259418 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:41:55.259424 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:41:55.259429 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:41:55.259435 | orchestrator |
2026-02-18 03:41:55.259443 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-18 03:41:55.259450 | orchestrator | Wednesday 18 February 2026 03:41:40 +0000 (0:00:00.375) 0:03:19.782 ****
2026-02-18 03:41:55.259456 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:41:55.259462 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:41:55.259468 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:41:55.259497 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 03:41:55.259503 | orchestrator |
2026-02-18 03:41:55.259509 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-18 03:41:55.259515 | orchestrator | Wednesday 18 February 2026 03:41:41 +0000 (0:00:01.134) 0:03:20.917 ****
2026-02-18 03:41:55.259521 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:41:55.259527 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:41:55.259533 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:41:55.259538 | orchestrator |
2026-02-18 03:41:55.259544 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-18 03:41:55.259550 | orchestrator | Wednesday 18 February 2026 03:41:42 +0000 (0:00:00.347) 0:03:21.264 ****
2026-02-18 03:41:55.259556 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:41:55.259563 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:41:55.259569 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:41:55.259575 | orchestrator |
2026-02-18 03:41:55.259582 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-18 03:41:55.259589 | orchestrator | Wednesday 18 February 2026 03:41:43 +0000 (0:00:01.245) 0:03:22.510 ****
2026-02-18 03:41:55.259596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 03:41:55.259619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 03:41:55.259626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 03:41:55.259632 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:41:55.259638 | orchestrator |
2026-02-18 03:41:55.259644 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-18 03:41:55.259650 | orchestrator | Wednesday 18 February 2026 03:41:44 +0000 (0:00:00.910) 0:03:23.421 ****
2026-02-18 03:41:55.259656 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:41:55.259662 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:41:55.259669 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:41:55.259675 | orchestrator |
2026-02-18 03:41:55.259681 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-18 03:41:55.259687 | orchestrator | Wednesday 18 February 2026 03:41:44 +0000 (0:00:00.570) 0:03:23.992 ****
2026-02-18 03:41:55.259693 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:41:55.259717 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:41:55.259724 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:41:55.259730 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:41:55.259735 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:41:55.259741 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:41:55.259746 | orchestrator |
2026-02-18 03:41:55.259771 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-18 03:41:55.259777 | orchestrator | Wednesday 18 February 2026 03:41:45 +0000 (0:00:00.666) 0:03:24.658 ****
2026-02-18 03:41:55.259782 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:41:55.259788 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:41:55.259794 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:41:55.259800 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:41:55.259806 | orchestrator |
2026-02-18 03:41:55.259812 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-18 03:41:55.259818 | orchestrator | Wednesday 18 February 2026 03:41:46 +0000 (0:00:01.159) 0:03:25.818 ****
2026-02-18 03:41:55.259824 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:41:55.259830 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:41:55.259836 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:41:55.259841 | orchestrator |
2026-02-18 03:41:55.259847 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-18 03:41:55.259853 | orchestrator | Wednesday 18 February 2026 03:41:47 +0000 (0:00:00.351) 0:03:26.169 ****
2026-02-18 03:41:55.259859 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:41:55.259872 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:41:55.259878 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:41:55.259884 | orchestrator |
2026-02-18 03:41:55.259889 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-18 03:41:55.259895 | orchestrator | Wednesday 18 February 2026 03:41:48 +0000 (0:00:01.248) 0:03:27.418 ****
2026-02-18 03:41:55.259902 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 03:41:55.259908 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 03:41:55.259914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 03:41:55.259920 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:41:55.259926 | orchestrator |
2026-02-18 03:41:55.259933 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-18 03:41:55.259939 | orchestrator | Wednesday 18 February 2026 03:41:49 +0000 (0:00:01.139) 0:03:28.557 ****
2026-02-18 03:41:55.259946 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:41:55.259952 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:41:55.259958 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:41:55.259964 | orchestrator |
2026-02-18 03:41:55.259970 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-02-18 03:41:55.259976 | orchestrator |
2026-02-18 03:41:55.259983 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-18 03:41:55.259989 | orchestrator | Wednesday 18 February 2026 03:41:50 +0000 (0:00:00.618) 0:03:29.176 ****
2026-02-18 03:41:55.259996 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:41:55.260005 | orchestrator |
2026-02-18 03:41:55.260011 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-18 03:41:55.260017 | orchestrator | Wednesday 18 February 2026 03:41:50 +0000 (0:00:00.786) 0:03:29.962 ****
2026-02-18 03:41:55.260024 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:41:55.260031 | orchestrator |
2026-02-18 03:41:55.260037 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-18 03:41:55.260043 | orchestrator | Wednesday 18 February 2026 03:41:51 +0000 (0:00:00.596) 0:03:30.558 ****
2026-02-18 03:41:55.260049 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:41:55.260055 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:41:55.260060 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:41:55.260066 | orchestrator |
2026-02-18 03:41:55.260071 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-18 03:41:55.260077 | orchestrator | Wednesday 18 February 2026 03:41:52 +0000 (0:00:00.712) 0:03:31.270 ****
2026-02-18 03:41:55.260082 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:41:55.260088 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:41:55.260093 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:41:55.260099 | orchestrator |
2026-02-18 03:41:55.260105 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-18 03:41:55.260110 | orchestrator | Wednesday 18 February 2026 03:41:52 +0000 (0:00:00.634) 0:03:31.905 ****
2026-02-18 03:41:55.260116 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:41:55.260121 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:41:55.260127 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:41:55.260132 | orchestrator |
2026-02-18 03:41:55.260138 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-18 03:41:55.260143 | orchestrator | Wednesday 18 February 2026 03:41:53 +0000 (0:00:00.374) 0:03:32.280 ****
2026-02-18 03:41:55.260149 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:41:55.260154 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:41:55.260165 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:41:55.260170 | orchestrator |
2026-02-18 03:41:55.260176 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-18 03:41:55.260181 | orchestrator | Wednesday 18 February 2026 03:41:53 +0000 (0:00:00.338) 0:03:32.619 ****
2026-02-18 03:41:55.260192 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:41:55.260197 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:41:55.260203 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:41:55.260208 | orchestrator |
2026-02-18 03:41:55.260214 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-18 03:41:55.260219 | orchestrator | Wednesday 18 February 2026 03:41:54 +0000 (0:00:00.708) 0:03:33.327 ****
2026-02-18 03:41:55.260225 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:41:55.260230 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:41:55.260236 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:41:55.260241 | orchestrator |
2026-02-18 03:41:55.260247 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-18 03:41:55.260253 | orchestrator | Wednesday 18 February 2026 03:41:54 +0000 (0:00:00.627) 0:03:33.955 ****
2026-02-18 03:41:55.260259 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:41:55.260265 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:41:55.260276 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:42:17.635912 | orchestrator |
2026-02-18 03:42:17.636058 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-18 03:42:17.636083 | orchestrator | Wednesday 18 February 2026 03:41:55 +0000 (0:00:00.340) 0:03:34.296 ****
2026-02-18 03:42:17.636102 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.636119 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.636135 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.636151 | orchestrator |
2026-02-18 03:42:17.636166 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-18 03:42:17.636181 | orchestrator | Wednesday 18 February 2026 03:41:55 +0000 (0:00:00.720) 0:03:35.016 ****
2026-02-18 03:42:17.636197 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.636213 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.636230 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.636244 | orchestrator |
2026-02-18 03:42:17.636261 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-18 03:42:17.636276 | orchestrator | Wednesday 18 February 2026 03:41:56 +0000 (0:00:00.722) 0:03:35.739 ****
2026-02-18 03:42:17.636291 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:42:17.636307 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:42:17.636322 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:42:17.636338 | orchestrator |
2026-02-18 03:42:17.636354 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-18 03:42:17.636370 | orchestrator | Wednesday 18 February 2026 03:41:57 +0000 (0:00:00.628) 0:03:36.367 ****
2026-02-18 03:42:17.636386 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.636422 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.636438 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.636455 | orchestrator |
2026-02-18 03:42:17.636472 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-18 03:42:17.636490 | orchestrator | Wednesday 18 February 2026 03:41:57 +0000 (0:00:00.368) 0:03:36.736 ****
2026-02-18 03:42:17.636507 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:42:17.636525 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:42:17.636541 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:42:17.636558 | orchestrator |
2026-02-18 03:42:17.636573 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-18 03:42:17.636590 | orchestrator | Wednesday 18 February 2026 03:41:58 +0000 (0:00:00.348) 0:03:37.085 ****
2026-02-18 03:42:17.636607 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:42:17.636624 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:42:17.636670 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:42:17.636687 | orchestrator |
2026-02-18 03:42:17.636704 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-18 03:42:17.636722 | orchestrator | Wednesday 18 February 2026 03:41:58 +0000 (0:00:00.318) 0:03:37.404 ****
2026-02-18 03:42:17.636743 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:42:17.636820 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:42:17.636833 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:42:17.636844 | orchestrator |
2026-02-18 03:42:17.636855 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-18 03:42:17.636865 | orchestrator | Wednesday 18 February 2026 03:41:58 +0000 (0:00:00.613) 0:03:38.017 ****
2026-02-18 03:42:17.636874 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:42:17.636884 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:42:17.636893 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:42:17.636902 | orchestrator |
2026-02-18 03:42:17.636912 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-18 03:42:17.636922 | orchestrator | Wednesday 18 February 2026 03:41:59 +0000 (0:00:00.339) 0:03:38.357 ****
2026-02-18 03:42:17.636931 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:42:17.636941 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:42:17.636950 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:42:17.636960 | orchestrator |
2026-02-18 03:42:17.636969 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-18 03:42:17.636979 | orchestrator | Wednesday 18 February 2026 03:41:59 +0000 (0:00:00.328) 0:03:38.685 ****
2026-02-18 03:42:17.636988 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.636998 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.637007 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.637017 | orchestrator |
2026-02-18 03:42:17.637026 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-18 03:42:17.637036 | orchestrator | Wednesday 18 February 2026 03:42:00 +0000 (0:00:00.373) 0:03:39.059 ****
2026-02-18 03:42:17.637045 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.637054 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.637064 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.637073 | orchestrator |
2026-02-18 03:42:17.637082 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-18 03:42:17.637092 | orchestrator | Wednesday 18 February 2026 03:42:00 +0000 (0:00:00.648) 0:03:39.707 ****
2026-02-18 03:42:17.637101 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.637110 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.637120 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.637129 | orchestrator |
2026-02-18 03:42:17.637154 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-18 03:42:17.637165 | orchestrator | Wednesday 18 February 2026 03:42:01 +0000 (0:00:00.603) 0:03:40.311 ****
2026-02-18 03:42:17.637181 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.637198 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.637213 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.637229 | orchestrator |
2026-02-18 03:42:17.637247 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-18 03:42:17.637263 | orchestrator | Wednesday 18 February 2026 03:42:01 +0000 (0:00:00.361) 0:03:40.672 ****
2026-02-18 03:42:17.637280 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:42:17.637297 | orchestrator |
2026-02-18 03:42:17.637314 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-18 03:42:17.637331 | orchestrator | Wednesday 18 February 2026 03:42:02 +0000 (0:00:00.928) 0:03:41.601 ****
2026-02-18 03:42:17.637347 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:42:17.637365 | orchestrator |
2026-02-18 03:42:17.637382 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-18 03:42:17.637424 | orchestrator | Wednesday 18 February 2026 03:42:02 +0000 (0:00:00.171) 0:03:41.773 ****
2026-02-18 03:42:17.637444 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-18 03:42:17.637461 | orchestrator |
2026-02-18 03:42:17.637479 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-18 03:42:17.637489 | orchestrator | Wednesday 18 February 2026 03:42:03 +0000 (0:00:01.068) 0:03:42.842 ****
2026-02-18 03:42:17.637510 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.637520 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.637529 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.637539 | orchestrator |
2026-02-18 03:42:17.637549 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-18 03:42:17.637558 | orchestrator | Wednesday 18 February 2026 03:42:04 +0000 (0:00:00.354) 0:03:43.196 ****
2026-02-18 03:42:17.637568 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.637577 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.637587 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.637596 | orchestrator |
2026-02-18 03:42:17.637605 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-18 03:42:17.637615 | orchestrator | Wednesday 18 February 2026 03:42:04 +0000 (0:00:00.636) 0:03:43.833 ****
2026-02-18 03:42:17.637625 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:42:17.637634 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:42:17.637705 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:42:17.637715 | orchestrator |
2026-02-18 03:42:17.637725 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-18 03:42:17.637735 | orchestrator | Wednesday 18 February 2026 03:42:06 +0000 (0:00:01.275) 0:03:45.108 ****
2026-02-18 03:42:17.637744 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:42:17.637754 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:42:17.637763 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:42:17.637773 | orchestrator |
2026-02-18 03:42:17.637783 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-18 03:42:17.637792 | orchestrator | Wednesday 18 February 2026 03:42:06 +0000 (0:00:00.788) 0:03:45.897 ****
2026-02-18 03:42:17.637802 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:42:17.637812 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:42:17.637821 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:42:17.637831 | orchestrator |
2026-02-18 03:42:17.637840 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-18 03:42:17.637850 | orchestrator | Wednesday 18 February 2026 03:42:07 +0000 (0:00:00.699) 0:03:46.597 ****
2026-02-18 03:42:17.637860 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.637869 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.637879 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.637888 | orchestrator |
2026-02-18 03:42:17.637898 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-18 03:42:17.637908 | orchestrator | Wednesday 18 February 2026 03:42:08 +0000 (0:00:00.998) 0:03:47.596 ****
2026-02-18 03:42:17.637917 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:42:17.637927 | orchestrator |
2026-02-18 03:42:17.637936 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-18 03:42:17.637946 | orchestrator | Wednesday 18 February 2026 03:42:09 +0000 (0:00:01.402) 0:03:48.999 ****
2026-02-18 03:42:17.637956 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.637965 | orchestrator |
2026-02-18 03:42:17.637975 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-18 03:42:17.637984 | orchestrator | Wednesday 18 February 2026 03:42:10 +0000 (0:00:00.802) 0:03:49.801 ****
2026-02-18 03:42:17.637994 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-18 03:42:17.638004 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 03:42:17.638013 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 03:42:17.638077 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-18 03:42:17.638088 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-18 03:42:17.638098 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-18 03:42:17.638108 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-18 03:42:17.638118 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-02-18 03:42:17.638127 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-18 03:42:17.638151 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-18 03:42:17.638169 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-18 03:42:17.638186 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-02-18 03:42:17.638203 | orchestrator |
2026-02-18 03:42:17.638219 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-18 03:42:17.638237 | orchestrator | Wednesday 18 February 2026 03:42:13 +0000 (0:00:03.169) 0:03:52.971 ****
2026-02-18 03:42:17.638255 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:42:17.638271 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:42:17.638298 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:42:17.638317 | orchestrator |
2026-02-18 03:42:17.638336 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-18 03:42:17.638353 | orchestrator | Wednesday 18 February 2026 03:42:15 +0000 (0:00:01.179) 0:03:54.150 ****
2026-02-18 03:42:17.638370 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.638380 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.638390 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.638399 | orchestrator |
2026-02-18 03:42:17.638409 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-18 03:42:17.638418 | orchestrator | Wednesday 18 February 2026 03:42:15 +0000 (0:00:00.654) 0:03:54.804 ****
2026-02-18 03:42:17.638428 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:42:17.638437 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:42:17.638447 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:42:17.638456 | orchestrator |
2026-02-18 03:42:17.638466 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-18 03:42:17.638475 | orchestrator | Wednesday 18 February 2026 03:42:16 +0000 (0:00:00.340) 0:03:55.144 ****
2026-02-18 03:42:17.638485 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:42:17.638494 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:42:17.638504 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:42:17.638513 | orchestrator |
2026-02-18 03:42:17.638533 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-18 03:43:19.631196 | orchestrator | Wednesday 18 February 2026 03:42:17 +0000 (0:00:01.524) 0:03:56.669 ****
2026-02-18 03:43:19.631310 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:43:19.631325 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:43:19.631335 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:43:19.631345 | orchestrator |
2026-02-18 03:43:19.631356 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-18 03:43:19.631366 | orchestrator | Wednesday 18 February 2026 03:42:18 +0000 (0:00:01.335) 0:03:58.005 ****
2026-02-18 03:43:19.631376 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:19.631385 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:19.631395 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:19.631404 | orchestrator |
2026-02-18 03:43:19.631414 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-18 03:43:19.631424 | orchestrator | Wednesday 18 February 2026 03:42:19 +0000 (0:00:00.588) 0:03:58.594 ****
2026-02-18 03:43:19.631434 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:43:19.631444 | orchestrator |
2026-02-18 03:43:19.631454 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-18 03:43:19.631465 | orchestrator | Wednesday 18 February 2026 03:42:20 +0000 (0:00:00.615) 0:03:59.209 ****
2026-02-18 03:43:19.631474 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:19.631508 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:19.631519 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:19.631529 | orchestrator |
2026-02-18 03:43:19.631538 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-18 03:43:19.631548 | orchestrator | Wednesday 18 February 2026 03:42:20 +0000 (0:00:00.364) 0:03:59.574 ****
2026-02-18 03:43:19.631558 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:19.631591 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:19.631602 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:19.631611 | orchestrator |
2026-02-18 03:43:19.631621 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-18 03:43:19.631631 | orchestrator | Wednesday 18 February 2026 03:42:21 +0000 (0:00:00.580) 0:04:00.155 ****
2026-02-18 03:43:19.631640 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:43:19.631739 | orchestrator |
2026-02-18 03:43:19.631752 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-18 03:43:19.631763 | orchestrator | Wednesday 18 February 2026 03:42:21 +0000 (0:00:00.584) 0:04:00.739 ****
2026-02-18 03:43:19.631775 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:43:19.631786 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:43:19.631797 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:43:19.631808 | orchestrator |
2026-02-18 03:43:19.631819 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-18 03:43:19.631831 | orchestrator | Wednesday 18 February 2026 03:42:23 +0000 (0:00:01.818) 0:04:02.557 ****
2026-02-18 03:43:19.631842 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:43:19.631852 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:43:19.631863 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:43:19.631874 | orchestrator |
2026-02-18 03:43:19.631885 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-18 03:43:19.631895 | orchestrator | Wednesday 18 February 2026 03:42:25 +0000 (0:00:01.511) 0:04:04.069 ****
2026-02-18 03:43:19.631906 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:43:19.631917 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:43:19.631927 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:43:19.631938 | orchestrator |
2026-02-18 03:43:19.631949 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-18 03:43:19.631959 | orchestrator | Wednesday 18 February 2026 03:42:26 +0000 (0:00:01.788) 0:04:05.857 ****
2026-02-18 03:43:19.631970 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:43:19.631981 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:43:19.631991 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:43:19.632002 | orchestrator |
2026-02-18 03:43:19.632013 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-18 03:43:19.632033 | orchestrator | Wednesday 18 February 2026 03:42:28 +0000 (0:00:01.974) 0:04:07.832 ****
2026-02-18 03:43:19.632045 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:43:19.632056 | orchestrator |
2026-02-18 03:43:19.632066 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-18 03:43:19.632077 | orchestrator | Wednesday 18 February 2026 03:42:29 +0000 (0:00:00.856) 0:04:08.688 ****
2026-02-18 03:43:19.632102 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-18 03:43:19.632115 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:19.632125 | orchestrator |
2026-02-18 03:43:19.632135 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-18 03:43:19.632144 | orchestrator | Wednesday 18 February 2026 03:42:51 +0000 (0:00:21.931) 0:04:30.620 ****
2026-02-18 03:43:19.632154 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:19.632164 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:19.632173 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:19.632183 | orchestrator |
2026-02-18 03:43:19.632192 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-18 03:43:19.632202 | orchestrator | Wednesday 18 February 2026 03:43:00 +0000 (0:00:09.054) 0:04:39.674 ****
2026-02-18 03:43:19.632211 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:19.632221 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:19.632230 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:19.632251 | orchestrator |
2026-02-18 03:43:19.632268 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-18 03:43:19.632283 | orchestrator | Wednesday 18 February 2026 03:43:00 +0000 (0:00:00.331) 0:04:40.006 ****
2026-02-18 03:43:19.632326 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a915c0b5a2bdcab5abc0d645e535de8190273730'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-18 03:43:19.632353 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a915c0b5a2bdcab5abc0d645e535de8190273730'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-18 03:43:19.632370 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a915c0b5a2bdcab5abc0d645e535de8190273730'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-18 03:43:19.632387 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a915c0b5a2bdcab5abc0d645e535de8190273730'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-18 03:43:19.632403 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a915c0b5a2bdcab5abc0d645e535de8190273730'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-18 03:43:19.632418 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a915c0b5a2bdcab5abc0d645e535de8190273730'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__a915c0b5a2bdcab5abc0d645e535de8190273730'}])
2026-02-18 03:43:19.632433 | orchestrator |
2026-02-18 03:43:19.632447 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-18 03:43:19.632464 | orchestrator | Wednesday 18 February 2026 03:43:15 +0000 (0:00:14.629) 0:04:54.635 ****
2026-02-18 03:43:19.632480 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:19.632523 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:19.632541 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:19.632557 | orchestrator |
2026-02-18 03:43:19.632572 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-18 03:43:19.632589 | orchestrator | Wednesday 18 February 2026 03:43:15 +0000 (0:00:00.377) 0:04:55.012 ****
2026-02-18 03:43:19.632601 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:43:19.632611 | orchestrator |
2026-02-18 03:43:19.632620 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-18 03:43:19.632629 | orchestrator | Wednesday 18 February 2026 03:43:16 +0000 (0:00:00.832) 0:04:55.845 ****
2026-02-18 03:43:19.632639 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:19.632648 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:19.632658 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:19.632668 | orchestrator |
2026-02-18 03:43:19.632688 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-18 03:43:19.632698 | orchestrator | Wednesday 18 February 2026 03:43:17 +0000 (0:00:00.404) 0:04:56.250 ****
2026-02-18 03:43:19.632715 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:19.632791 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:19.632803 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:19.632812 | orchestrator |
2026-02-18 03:43:19.632822 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-18 03:43:19.632832 | orchestrator | Wednesday 18 February 2026 03:43:17 +0000 (0:00:00.372) 0:04:56.622 ****
2026-02-18 03:43:19.632841 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 03:43:19.632851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 03:43:19.632861 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 03:43:19.632870 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:19.632879 | orchestrator |
2026-02-18 03:43:19.632889 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-18 03:43:19.632899 | orchestrator | Wednesday 18 February 2026 03:43:18 +0000 (0:00:01.017) 0:04:57.640 ****
2026-02-18 03:43:19.632908 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:19.632918 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:19.632927 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:19.632937 | orchestrator |
2026-02-18 03:43:19.632946 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-18 03:43:19.632956 | orchestrator |
2026-02-18 03:43:19.633031 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-18 03:43:47.509355 | orchestrator | Wednesday 18 February 2026 03:43:19 +0000 (0:00:01.016) 0:04:58.657 ****
2026-02-18 03:43:47.509464 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:43:47.509474 | orchestrator |
2026-02-18 03:43:47.509479 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-18 03:43:47.509485 | orchestrator | Wednesday 18 February 2026 03:43:20 +0000 (0:00:00.579) 0:04:59.237 ****
2026-02-18 03:43:47.509490 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:43:47.509495 | orchestrator |
2026-02-18 03:43:47.509500 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-18 03:43:47.509505 | orchestrator | Wednesday 18 February 2026 03:43:21 +0000 (0:00:00.815) 0:05:00.052 ****
2026-02-18 03:43:47.509509 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:47.509515 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:47.509519 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:47.509524 | orchestrator |
2026-02-18 03:43:47.509528 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-18 03:43:47.509533 | orchestrator | Wednesday 18 February 2026 03:43:21 +0000 (0:00:00.785) 0:05:00.837 ****
2026-02-18 03:43:47.509538 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.509544 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.509548 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.509553 | orchestrator |
2026-02-18 03:43:47.509557 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-18 03:43:47.509562 | orchestrator | Wednesday 18 February 2026 03:43:22 +0000 (0:00:00.344) 0:05:01.182 ****
2026-02-18 03:43:47.509566 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.509571 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.509575 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.509580 | orchestrator |
2026-02-18 03:43:47.509584 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-18 03:43:47.509589 | orchestrator | Wednesday 18 February 2026 03:43:22 +0000 (0:00:00.582) 0:05:01.765 ****
2026-02-18 03:43:47.509593 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.509598 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.509618 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.509623 | orchestrator |
2026-02-18 03:43:47.509627 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-18 03:43:47.509632 | orchestrator | Wednesday 18 February 2026 03:43:23 +0000 (0:00:00.341) 0:05:02.106 ****
2026-02-18 03:43:47.509637 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:47.509641 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:47.509646 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:47.509650 | orchestrator |
2026-02-18 03:43:47.509655 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-18 03:43:47.509659 | orchestrator | Wednesday 18 February 2026 03:43:23 +0000 (0:00:00.751) 0:05:02.858 ****
2026-02-18 03:43:47.509664 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.509668 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.509673 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.509677 | orchestrator |
2026-02-18 03:43:47.509682 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-18 03:43:47.509686 | orchestrator | Wednesday 18 February 2026 03:43:24 +0000 (0:00:00.340) 0:05:03.198 ****
2026-02-18 03:43:47.509691 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.509695 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.509700 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.509704 | orchestrator |
2026-02-18 03:43:47.509709 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-18 03:43:47.509713 | orchestrator | Wednesday 18 February 2026 03:43:24 +0000 (0:00:00.606) 0:05:03.805 ****
2026-02-18 03:43:47.509718 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:47.509722 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:47.509727 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:47.509731 | orchestrator |
2026-02-18 03:43:47.509736 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-18 03:43:47.509740 | orchestrator | Wednesday 18 February 2026 03:43:25 +0000 (0:00:00.744) 0:05:04.550 ****
2026-02-18 03:43:47.509745 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:47.509749 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:47.509754 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:47.509759 | orchestrator |
2026-02-18 03:43:47.509763 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-18 03:43:47.509768 | orchestrator | Wednesday 18 February 2026 03:43:26 +0000 (0:00:00.767) 0:05:05.317 ****
2026-02-18 03:43:47.509772 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.509797 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.509802 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.509814 | orchestrator |
2026-02-18 03:43:47.509818 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-18 03:43:47.509823 | orchestrator | Wednesday 18 February 2026 03:43:26 +0000 (0:00:00.346) 0:05:05.664 ****
2026-02-18 03:43:47.509827 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:47.509832 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:47.509836 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:47.509841 | orchestrator |
2026-02-18 03:43:47.509845 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-18 03:43:47.509851 | orchestrator | Wednesday 18 February 2026 03:43:27 +0000 (0:00:00.641) 0:05:06.306 ****
2026-02-18 03:43:47.509858 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.509865 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.509875 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.509885 | orchestrator |
2026-02-18 03:43:47.509892 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-18 03:43:47.509899 | orchestrator | Wednesday 18 February 2026 03:43:27 +0000 (0:00:00.356) 0:05:06.662 ****
2026-02-18 03:43:47.509906 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.509913 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.509920 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.509928 | orchestrator |
2026-02-18 03:43:47.509957 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-18 03:43:47.509966 | orchestrator | Wednesday 18 February 2026 03:43:27 +0000 (0:00:00.365) 0:05:07.027 ****
2026-02-18 03:43:47.509972 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.509980 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.509988 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.509996 | orchestrator |
2026-02-18 03:43:47.510003 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-18 03:43:47.510011 | orchestrator | Wednesday 18 February 2026 03:43:28 +0000 (0:00:00.361) 0:05:07.389 ****
2026-02-18 03:43:47.510069 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.510077 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.510085 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.510092 | orchestrator |
2026-02-18 03:43:47.510097 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-18 03:43:47.510103 | orchestrator | Wednesday 18 February 2026 03:43:29 +0000 (0:00:00.867) 0:05:08.256 ****
2026-02-18 03:43:47.510108 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.510113 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.510118 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.510123 | orchestrator |
2026-02-18 03:43:47.510128 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-18 03:43:47.510133 | orchestrator | Wednesday 18 February 2026 03:43:29 +0000 (0:00:00.379) 0:05:08.635 ****
2026-02-18 03:43:47.510138 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:47.510143 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:47.510148 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:47.510154 | orchestrator |
2026-02-18 03:43:47.510162 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-18 03:43:47.510169 | orchestrator | Wednesday 18 February 2026 03:43:29 +0000 (0:00:00.375) 0:05:09.011 ****
2026-02-18 03:43:47.510177 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:47.510184 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:47.510192 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:47.510199 | orchestrator |
2026-02-18 03:43:47.510205 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-18 03:43:47.510212 | orchestrator | Wednesday 18 February 2026 03:43:30 +0000 (0:00:00.395) 0:05:09.407 ****
2026-02-18 03:43:47.510218 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:47.510226 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:47.510232 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:47.510239 | orchestrator |
2026-02-18 03:43:47.510246 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-18 03:43:47.510252 | orchestrator | Wednesday 18 February 2026 03:43:31 +0000 (0:00:00.921) 0:05:10.328 ****
2026-02-18 03:43:47.510260 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 03:43:47.510267 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 03:43:47.510275 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 03:43:47.510282 | orchestrator |
2026-02-18 03:43:47.510289 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-18 03:43:47.510296 | orchestrator | Wednesday 18 February 2026 03:43:32 +0000 (0:00:00.762) 0:05:11.090 ****
2026-02-18 03:43:47.510302 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:43:47.510312 | orchestrator |
2026-02-18 03:43:47.510321 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-18 03:43:47.510328 | orchestrator | Wednesday 18 February 2026 03:43:32 +0000 (0:00:00.523) 0:05:11.614 ****
2026-02-18 03:43:47.510335 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:43:47.510342 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:43:47.510348 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:43:47.510355 | orchestrator |
2026-02-18 03:43:47.510361 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-18 03:43:47.510375 | orchestrator | Wednesday 18 February 2026 03:43:33 +0000 (0:00:01.040) 0:05:12.655 ****
2026-02-18 03:43:47.510382 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:43:47.510389 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:43:47.510396 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:43:47.510402 | orchestrator |
2026-02-18 03:43:47.510408 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-18 03:43:47.510416 | orchestrator | Wednesday 18 February 2026 03:43:33 +0000 (0:00:00.313) 0:05:12.969 ****
2026-02-18 03:43:47.510471 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-18 03:43:47.510480 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-18 03:43:47.510486 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-18 03:43:47.510493 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-18 03:43:47.510499 | orchestrator |
2026-02-18 03:43:47.510513 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-18 03:43:47.510520 | orchestrator | Wednesday 18 February 2026 03:43:44 +0000 (0:00:10.593) 0:05:23.562 ****
2026-02-18 03:43:47.510526 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:43:47.510533 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:43:47.510540 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:43:47.510546 | orchestrator |
2026-02-18 03:43:47.510552 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-18 03:43:47.510559 | orchestrator | Wednesday 18 February 2026 03:43:44 +0000 (0:00:00.367) 0:05:23.930 ****
2026-02-18 03:43:47.510565 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-18 03:43:47.510572 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-18 03:43:47.510578 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-18 03:43:47.510585 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-18 03:43:47.510592 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 03:43:47.510598 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 03:43:47.510605 | orchestrator |
2026-02-18 03:43:47.510611 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-18 03:43:47.510628 | orchestrator | Wednesday 18 February 2026 03:43:47 +0000 (0:00:02.603) 0:05:26.534 ****
2026-02-18 03:44:44.529842 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-18 03:44:44.529930 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-18 03:44:44.529938 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-18 03:44:44.529945 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-18 03:44:44.529951 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-18 03:44:44.529957 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-18 03:44:44.529963 | orchestrator |
2026-02-18 03:44:44.529970 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-18 03:44:44.529977 | orchestrator | Wednesday 18 February 2026 03:43:48 +0000 (0:00:01.235) 0:05:27.769 ****
2026-02-18 03:44:44.529983 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:44:44.529989 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:44:44.529994 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:44:44.530000 | orchestrator |
2026-02-18 03:44:44.530006 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-18 03:44:44.530053 | orchestrator | Wednesday 18 February 2026 03:43:49 +0000 (0:00:00.766) 0:05:28.536 ****
2026-02-18 03:44:44.530060 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:44:44.530066 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:44:44.530072 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:44:44.530078 | orchestrator |
2026-02-18 03:44:44.530084 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-18 03:44:44.530090 | orchestrator | Wednesday 18 February 2026 03:43:49 +0000 (0:00:00.327) 0:05:28.863 ****
2026-02-18 03:44:44.530114 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:44:44.530120 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:44:44.530126 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:44:44.530131 | orchestrator |
2026-02-18 03:44:44.530137 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-18 03:44:44.530143 | orchestrator | Wednesday 18 February 2026 03:43:50 +0000 (0:00:00.643) 0:05:29.507 ****
2026-02-18 03:44:44.530149 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:44:44.530155 | orchestrator |
2026-02-18 03:44:44.530161 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-18 03:44:44.530166 | orchestrator | Wednesday 18 February 2026 03:43:51 +0000 (0:00:00.602) 0:05:30.110 ****
2026-02-18 03:44:44.530172 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:44:44.530178 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:44:44.530183 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:44:44.530189 | orchestrator |
2026-02-18 03:44:44.530195 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-18 03:44:44.530200 | orchestrator | Wednesday 18 February 2026 03:43:51 +0000 (0:00:00.329) 0:05:30.439 ****
2026-02-18 03:44:44.530206 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:44:44.530212 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:44:44.530217 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:44:44.530223 | orchestrator |
2026-02-18 03:44:44.530229 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-18 03:44:44.530234 | orchestrator | Wednesday 18 February 2026 03:43:52 +0000 (0:00:00.630) 0:05:31.070 ****
2026-02-18 03:44:44.530240 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:44:44.530247 | orchestrator |
2026-02-18 03:44:44.530252 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-18 03:44:44.530258 | orchestrator | Wednesday 18 February 2026 03:43:52 +0000 (0:00:00.605) 0:05:31.675 ****
2026-02-18 03:44:44.530264 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:44:44.530269 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:44:44.530275 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:44:44.530281 | orchestrator |
2026-02-18 03:44:44.530337 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-18 03:44:44.530344 | orchestrator | Wednesday 18 February 2026 03:43:53 +0000 (0:00:01.287) 0:05:32.963 ****
2026-02-18 03:44:44.530350 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:44:44.530364 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:44:44.530370 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:44:44.530375 | orchestrator |
2026-02-18 03:44:44.530381 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-18 03:44:44.530387 | orchestrator | Wednesday 18 February 2026 03:43:55 +0000 (0:00:01.477) 0:05:34.441 ****
2026-02-18 03:44:44.530392 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:44:44.530398 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:44:44.530404 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:44:44.530411 | orchestrator |
2026-02-18 03:44:44.530417 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-18 03:44:44.530447 | orchestrator | Wednesday 18 February 2026 03:43:57 +0000 (0:00:01.776) 0:05:36.217 ****
2026-02-18 03:44:44.530457 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:44:44.530466 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:44:44.530475 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:44:44.530484 | orchestrator |
2026-02-18 03:44:44.530493 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-18 03:44:44.530502 | orchestrator | Wednesday 18 February 2026 03:43:59 +0000 (0:00:02.012) 0:05:38.229 ****
2026-02-18 03:44:44.530511 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:44:44.530520 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:44:44.530529 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-18 03:44:44.530549 | orchestrator |
2026-02-18 03:44:44.530559 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-18 03:44:44.530568 | orchestrator | Wednesday 18 February 2026 03:43:59 +0000 (0:00:00.710) 0:05:38.939 ****
2026-02-18 03:44:44.530578 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-18 03:44:44.530587 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-18 03:44:44.530614 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-18 03:44:44.530625 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-18 03:44:44.530636 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-18 03:44:44.530647 | orchestrator |
2026-02-18 03:44:44.530659 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-18 03:44:44.530666 | orchestrator | Wednesday 18 February 2026 03:44:24 +0000 (0:00:24.431) 0:06:03.371 ****
2026-02-18 03:44:44.530673 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-18 03:44:44.530679 | orchestrator |
2026-02-18 03:44:44.530686 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-18 03:44:44.530692 | orchestrator | Wednesday 18 February 2026 03:44:25 +0000 (0:00:01.297) 0:06:04.668 ****
2026-02-18 03:44:44.530699 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:44:44.530706 | orchestrator |
2026-02-18 03:44:44.530712 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-18 03:44:44.530718 | orchestrator | Wednesday 18 February 2026 03:44:25 +0000 (0:00:00.317) 0:06:04.986 ****
2026-02-18 03:44:44.530725 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:44:44.530731 | orchestrator |
2026-02-18 03:44:44.530738 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-18 03:44:44.530744 | orchestrator | Wednesday 18 February 2026 03:44:26 +0000 (0:00:00.150) 0:06:05.136 ****
2026-02-18 03:44:44.530751 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-02-18 03:44:44.530757 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-02-18 03:44:44.530764 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-02-18 03:44:44.530771 | orchestrator |
2026-02-18 03:44:44.530777 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-18 03:44:44.530784 | orchestrator | Wednesday 18 February 2026 03:44:33 +0000 (0:00:07.172) 0:06:12.308 ****
2026-02-18 03:44:44.530791 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-18 03:44:44.530797 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-02-18 03:44:44.530804 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-02-18 03:44:44.530811 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-18 03:44:44.530817 | orchestrator |
2026-02-18 03:44:44.530824 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-18 03:44:44.530831 | orchestrator | Wednesday 18 February 2026 03:44:38 +0000 (0:00:05.172) 0:06:17.481 ****
2026-02-18 03:44:44.530837 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:44:44.530843 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:44:44.530848 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:44:44.530854 | orchestrator |
2026-02-18 03:44:44.530860 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-18 03:44:44.530866 | orchestrator | Wednesday 18 February 2026 03:44:39 +0000 (0:00:00.681) 0:06:18.163 ****
2026-02-18 03:44:44.530871 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:44:44.530877 | orchestrator |
2026-02-18 03:44:44.530888 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-18 03:44:44.530894 | orchestrator | Wednesday 18 February 2026 03:44:39 +0000 (0:00:00.609) 0:06:18.773 ****
2026-02-18 03:44:44.530900 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:44:44.530906 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:44:44.530912 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:44:44.530919 | orchestrator |
2026-02-18 03:44:44.530928 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-18 03:44:44.530937 | orchestrator | Wednesday 18 February 2026 03:44:40 +0000 (0:00:00.628) 0:06:19.402 ****
2026-02-18 03:44:44.530946 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:44:44.530955 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:44:44.530965 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:44:44.530974 | orchestrator |
2026-02-18 03:44:44.530983 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-18 03:44:44.530992 | orchestrator | Wednesday 18 February 2026 03:44:41 +0000 (0:00:01.158) 0:06:20.560 ****
2026-02-18 03:44:44.531001 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 03:44:44.531011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 03:44:44.531028 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 03:44:44.531038 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:44:44.531048 | orchestrator |
2026-02-18 03:44:44.531056 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-18 03:44:44.531062 | orchestrator | Wednesday 18 February 2026 03:44:42 +0000 (0:00:00.641) 0:06:21.202 ****
2026-02-18 03:44:44.531067 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:44:44.531073 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:44:44.531079 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:44:44.531084 | orchestrator |
2026-02-18 03:44:44.531090 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-18 03:44:44.531099 | orchestrator |
2026-02-18 03:44:44.531109 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-18 03:44:44.531118 | orchestrator | Wednesday 18 February 2026 03:44:42 +0000 (0:00:00.595) 0:06:21.797 ****
2026-02-18 03:44:44.531127 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 03:44:44.531137 | orchestrator |
2026-02-18 03:44:44.531147 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-18 03:44:44.531156 | orchestrator | Wednesday 18 February 2026 03:44:43 +0000 (0:00:00.888) 0:06:22.685 ****
2026-02-18 03:44:44.531174 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 03:45:01.872493 | orchestrator |
2026-02-18 03:45:01.872591 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-18 03:45:01.872602 | orchestrator | Wednesday 18 February 2026 03:44:44 +0000 (0:00:00.874) 0:06:23.559 ****
2026-02-18 03:45:01.872609 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:45:01.872616 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:45:01.872623 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:45:01.872629 | orchestrator |
2026-02-18 03:45:01.872636 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-18 03:45:01.872642 | orchestrator | Wednesday 18 February 2026 03:44:44 +0000 (0:00:00.364) 0:06:23.924 ****
2026-02-18 03:45:01.872648 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:45:01.872655 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:45:01.872664 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:45:01.872675 | orchestrator |
2026-02-18 03:45:01.872686 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-18 03:45:01.872696 | orchestrator | Wednesday 18 February 2026 03:44:45 +0000 (0:00:00.681) 0:06:24.605 ****
2026-02-18 03:45:01.872706 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:45:01.872716 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:45:01.872749 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:45:01.872759 | orchestrator |
2026-02-18 03:45:01.872770 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-18 03:45:01.872780 | orchestrator | Wednesday 18 February 2026 03:44:46 +0000 (0:00:00.720) 0:06:25.325 ****
2026-02-18 03:45:01.872790 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:45:01.872802 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:45:01.872812 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:45:01.872824 | orchestrator |
2026-02-18 03:45:01.872836 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-18 03:45:01.872847 | orchestrator | Wednesday 18 February 2026 03:44:47 +0000 (0:00:00.984) 0:06:26.310 ****
2026-02-18 03:45:01.872857 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:45:01.872868 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:45:01.872878 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:45:01.872889 | orchestrator |
2026-02-18 03:45:01.872899 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-18 03:45:01.872910 | orchestrator | Wednesday 18 February 2026 03:44:47 +0000 (0:00:00.367) 0:06:26.677 ****
2026-02-18 03:45:01.872916 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:45:01.872923 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:45:01.872929 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:45:01.872935 | orchestrator |
2026-02-18 03:45:01.872941 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-18 03:45:01.872948 | orchestrator | Wednesday 18 February 2026 03:44:47 +0000 (0:00:00.335) 0:06:27.012 ****
2026-02-18 03:45:01.872954 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:45:01.872960 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:45:01.872966 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:45:01.872972 | orchestrator |
2026-02-18 03:45:01.872978 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-18 03:45:01.872984 | orchestrator | Wednesday 18 February 2026 03:44:48 +0000 (0:00:00.346) 0:06:27.359 ****
2026-02-18 03:45:01.872991 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:45:01.872997 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:45:01.873003 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:45:01.873009 | orchestrator |
2026-02-18 03:45:01.873015 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-18 03:45:01.873021 | orchestrator | Wednesday 18 February 2026 03:44:49 +0000 (0:00:01.054) 0:06:28.413 ****
2026-02-18 03:45:01.873027 | orchestrator | ok: [testbed-node-3]
2026-02-18
03:45:01.873033 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:45:01.873039 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:45:01.873045 | orchestrator | 2026-02-18 03:45:01.873053 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-18 03:45:01.873060 | orchestrator | Wednesday 18 February 2026 03:44:50 +0000 (0:00:00.723) 0:06:29.136 **** 2026-02-18 03:45:01.873068 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:45:01.873075 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:45:01.873082 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:45:01.873089 | orchestrator | 2026-02-18 03:45:01.873096 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 03:45:01.873104 | orchestrator | Wednesday 18 February 2026 03:44:50 +0000 (0:00:00.345) 0:06:29.482 **** 2026-02-18 03:45:01.873111 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:45:01.873118 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:45:01.873125 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:45:01.873131 | orchestrator | 2026-02-18 03:45:01.873137 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 03:45:01.873157 | orchestrator | Wednesday 18 February 2026 03:44:50 +0000 (0:00:00.330) 0:06:29.813 **** 2026-02-18 03:45:01.873163 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:45:01.873169 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:45:01.873175 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:45:01.873181 | orchestrator | 2026-02-18 03:45:01.873194 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 03:45:01.873200 | orchestrator | Wednesday 18 February 2026 03:44:51 +0000 (0:00:00.646) 0:06:30.460 **** 2026-02-18 03:45:01.873206 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:45:01.873212 | orchestrator | ok: 
[testbed-node-4] 2026-02-18 03:45:01.873218 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:45:01.873224 | orchestrator | 2026-02-18 03:45:01.873231 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 03:45:01.873251 | orchestrator | Wednesday 18 February 2026 03:44:51 +0000 (0:00:00.399) 0:06:30.860 **** 2026-02-18 03:45:01.873257 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:45:01.873283 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:45:01.873290 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:45:01.873296 | orchestrator | 2026-02-18 03:45:01.873303 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 03:45:01.873309 | orchestrator | Wednesday 18 February 2026 03:44:52 +0000 (0:00:00.375) 0:06:31.235 **** 2026-02-18 03:45:01.873315 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:45:01.873321 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:45:01.873328 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:45:01.873334 | orchestrator | 2026-02-18 03:45:01.873340 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 03:45:01.873362 | orchestrator | Wednesday 18 February 2026 03:44:52 +0000 (0:00:00.318) 0:06:31.553 **** 2026-02-18 03:45:01.873369 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:45:01.873375 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:45:01.873381 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:45:01.873387 | orchestrator | 2026-02-18 03:45:01.873394 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 03:45:01.873400 | orchestrator | Wednesday 18 February 2026 03:44:53 +0000 (0:00:00.623) 0:06:32.177 **** 2026-02-18 03:45:01.873406 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:45:01.873412 | orchestrator | skipping: [testbed-node-4] 2026-02-18 
03:45:01.873418 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:45:01.873424 | orchestrator | 2026-02-18 03:45:01.873430 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 03:45:01.873436 | orchestrator | Wednesday 18 February 2026 03:44:53 +0000 (0:00:00.337) 0:06:32.514 **** 2026-02-18 03:45:01.873442 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:45:01.873449 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:45:01.873455 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:45:01.873461 | orchestrator | 2026-02-18 03:45:01.873467 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 03:45:01.873473 | orchestrator | Wednesday 18 February 2026 03:44:53 +0000 (0:00:00.354) 0:06:32.869 **** 2026-02-18 03:45:01.873479 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:45:01.873485 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:45:01.873491 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:45:01.873497 | orchestrator | 2026-02-18 03:45:01.873503 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-18 03:45:01.873510 | orchestrator | Wednesday 18 February 2026 03:44:54 +0000 (0:00:00.888) 0:06:33.758 **** 2026-02-18 03:45:01.873516 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:45:01.873522 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:45:01.873528 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:45:01.873534 | orchestrator | 2026-02-18 03:45:01.873540 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-18 03:45:01.873546 | orchestrator | Wednesday 18 February 2026 03:44:55 +0000 (0:00:00.358) 0:06:34.116 **** 2026-02-18 03:45:01.873553 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 03:45:01.873560 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 03:45:01.873566 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 03:45:01.873577 | orchestrator | 2026-02-18 03:45:01.873583 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-18 03:45:01.873589 | orchestrator | Wednesday 18 February 2026 03:44:55 +0000 (0:00:00.742) 0:06:34.858 **** 2026-02-18 03:45:01.873595 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:45:01.873602 | orchestrator | 2026-02-18 03:45:01.873608 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-18 03:45:01.873614 | orchestrator | Wednesday 18 February 2026 03:44:56 +0000 (0:00:00.562) 0:06:35.421 **** 2026-02-18 03:45:01.873620 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:45:01.873626 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:45:01.873632 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:45:01.873638 | orchestrator | 2026-02-18 03:45:01.873645 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-18 03:45:01.873651 | orchestrator | Wednesday 18 February 2026 03:44:57 +0000 (0:00:00.660) 0:06:36.082 **** 2026-02-18 03:45:01.873657 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:45:01.873663 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:45:01.873669 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:45:01.873675 | orchestrator | 2026-02-18 03:45:01.873681 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-18 03:45:01.873687 | orchestrator | Wednesday 18 February 2026 03:44:57 +0000 (0:00:00.335) 0:06:36.417 **** 2026-02-18 03:45:01.873693 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:45:01.873699 | 
orchestrator | ok: [testbed-node-4] 2026-02-18 03:45:01.873706 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:45:01.873712 | orchestrator | 2026-02-18 03:45:01.873718 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-18 03:45:01.873724 | orchestrator | Wednesday 18 February 2026 03:44:58 +0000 (0:00:00.662) 0:06:37.080 **** 2026-02-18 03:45:01.873730 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:45:01.873736 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:45:01.873743 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:45:01.873754 | orchestrator | 2026-02-18 03:45:01.873769 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-18 03:45:01.873779 | orchestrator | Wednesday 18 February 2026 03:44:58 +0000 (0:00:00.656) 0:06:37.737 **** 2026-02-18 03:45:01.873790 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-18 03:45:01.873801 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-18 03:45:01.873813 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-18 03:45:01.873823 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-18 03:45:01.873834 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-18 03:45:01.873845 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-18 03:45:01.873851 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-18 03:45:01.873858 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-18 03:45:01.873864 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'vm.swappiness', 'value': 10}) 2026-02-18 03:45:01.873875 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-18 03:46:11.918601 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-18 03:46:11.918709 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-18 03:46:11.918724 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-18 03:46:11.918737 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-18 03:46:11.918772 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-18 03:46:11.918785 | orchestrator | 2026-02-18 03:46:11.918796 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-18 03:46:11.918807 | orchestrator | Wednesday 18 February 2026 03:45:01 +0000 (0:00:03.163) 0:06:40.901 **** 2026-02-18 03:46:11.918818 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:46:11.918830 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:46:11.918841 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:46:11.918852 | orchestrator | 2026-02-18 03:46:11.918863 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-18 03:46:11.918874 | orchestrator | Wednesday 18 February 2026 03:45:02 +0000 (0:00:00.370) 0:06:41.272 **** 2026-02-18 03:46:11.918884 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:46:11.918896 | orchestrator | 2026-02-18 03:46:11.918907 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-18 03:46:11.918917 | orchestrator | Wednesday 18 February 2026 03:45:03 +0000 (0:00:00.840) 0:06:42.112 
**** 2026-02-18 03:46:11.918928 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-18 03:46:11.918939 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-18 03:46:11.918950 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-18 03:46:11.918961 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-18 03:46:11.918972 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-18 03:46:11.918983 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-18 03:46:11.918993 | orchestrator | 2026-02-18 03:46:11.919004 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-18 03:46:11.919015 | orchestrator | Wednesday 18 February 2026 03:45:04 +0000 (0:00:00.999) 0:06:43.111 **** 2026-02-18 03:46:11.919025 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:46:11.919036 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-18 03:46:11.919047 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 03:46:11.919057 | orchestrator | 2026-02-18 03:46:11.919068 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-18 03:46:11.919079 | orchestrator | Wednesday 18 February 2026 03:45:06 +0000 (0:00:02.104) 0:06:45.216 **** 2026-02-18 03:46:11.919097 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-18 03:46:11.919116 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-18 03:46:11.919242 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:46:11.919266 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-18 03:46:11.919284 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-18 03:46:11.919305 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:46:11.919326 | 
orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-18 03:46:11.919346 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-18 03:46:11.919362 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:46:11.919375 | orchestrator | 2026-02-18 03:46:11.919387 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-18 03:46:11.919401 | orchestrator | Wednesday 18 February 2026 03:45:07 +0000 (0:00:01.149) 0:06:46.366 **** 2026-02-18 03:46:11.919413 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-18 03:46:11.919427 | orchestrator | 2026-02-18 03:46:11.919445 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-18 03:46:11.919471 | orchestrator | Wednesday 18 February 2026 03:45:09 +0000 (0:00:02.069) 0:06:48.435 **** 2026-02-18 03:46:11.919513 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:46:11.919532 | orchestrator | 2026-02-18 03:46:11.919566 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-18 03:46:11.919582 | orchestrator | Wednesday 18 February 2026 03:45:10 +0000 (0:00:00.944) 0:06:49.380 **** 2026-02-18 03:46:11.919599 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'}) 2026-02-18 03:46:11.919619 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'}) 2026-02-18 03:46:11.919635 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'}) 2026-02-18 03:46:11.919653 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'}) 2026-02-18 03:46:11.919672 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'}) 2026-02-18 03:46:11.919714 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'}) 2026-02-18 03:46:11.919735 | orchestrator | 2026-02-18 03:46:11.919753 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-18 03:46:11.919771 | orchestrator | Wednesday 18 February 2026 03:45:52 +0000 (0:00:42.550) 0:07:31.931 **** 2026-02-18 03:46:11.919783 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:46:11.919801 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:46:11.919819 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:46:11.919838 | orchestrator | 2026-02-18 03:46:11.919855 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-18 03:46:11.919875 | orchestrator | Wednesday 18 February 2026 03:45:53 +0000 (0:00:00.350) 0:07:32.282 **** 2026-02-18 03:46:11.919893 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:46:11.919911 | orchestrator | 2026-02-18 03:46:11.919929 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-18 03:46:11.919949 | orchestrator | Wednesday 18 February 2026 03:45:54 +0000 (0:00:00.835) 0:07:33.118 **** 2026-02-18 03:46:11.919969 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:46:11.919988 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:46:11.920006 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:46:11.920025 | orchestrator | 2026-02-18 03:46:11.920044 | orchestrator 
| TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-18 03:46:11.920063 | orchestrator | Wednesday 18 February 2026 03:45:54 +0000 (0:00:00.690) 0:07:33.809 **** 2026-02-18 03:46:11.920075 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:46:11.920086 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:46:11.920097 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:46:11.920108 | orchestrator | 2026-02-18 03:46:11.920119 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-18 03:46:11.920160 | orchestrator | Wednesday 18 February 2026 03:45:57 +0000 (0:00:02.661) 0:07:36.470 **** 2026-02-18 03:46:11.920180 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:46:11.920200 | orchestrator | 2026-02-18 03:46:11.920218 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-18 03:46:11.920236 | orchestrator | Wednesday 18 February 2026 03:45:58 +0000 (0:00:00.865) 0:07:37.335 **** 2026-02-18 03:46:11.920253 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:46:11.920264 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:46:11.920274 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:46:11.920285 | orchestrator | 2026-02-18 03:46:11.920295 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-18 03:46:11.920306 | orchestrator | Wednesday 18 February 2026 03:45:59 +0000 (0:00:01.225) 0:07:38.561 **** 2026-02-18 03:46:11.920329 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:46:11.920340 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:46:11.920351 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:46:11.920361 | orchestrator | 2026-02-18 03:46:11.920372 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-18 
03:46:11.920383 | orchestrator | Wednesday 18 February 2026 03:46:00 +0000 (0:00:01.160) 0:07:39.721 **** 2026-02-18 03:46:11.920393 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:46:11.920404 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:46:11.920415 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:46:11.920425 | orchestrator | 2026-02-18 03:46:11.920436 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-18 03:46:11.920447 | orchestrator | Wednesday 18 February 2026 03:46:03 +0000 (0:00:03.010) 0:07:42.731 **** 2026-02-18 03:46:11.920457 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:46:11.920468 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:46:11.920479 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:46:11.920490 | orchestrator | 2026-02-18 03:46:11.920500 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-18 03:46:11.920511 | orchestrator | Wednesday 18 February 2026 03:46:04 +0000 (0:00:00.357) 0:07:43.089 **** 2026-02-18 03:46:11.920522 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:46:11.920533 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:46:11.920543 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:46:11.920554 | orchestrator | 2026-02-18 03:46:11.920564 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-18 03:46:11.920575 | orchestrator | Wednesday 18 February 2026 03:46:04 +0000 (0:00:00.406) 0:07:43.495 **** 2026-02-18 03:46:11.920586 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-18 03:46:11.920605 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-18 03:46:11.920616 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-18 03:46:11.920626 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-02-18 03:46:11.920638 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-18 03:46:11.920657 | 
orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-18 03:46:11.920674 | orchestrator | 2026-02-18 03:46:11.920692 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-18 03:46:11.920711 | orchestrator | Wednesday 18 February 2026 03:46:05 +0000 (0:00:01.047) 0:07:44.543 **** 2026-02-18 03:46:11.920727 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-18 03:46:11.920746 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-18 03:46:11.920765 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-18 03:46:11.920784 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-02-18 03:46:11.920803 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-18 03:46:11.920821 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-18 03:46:11.920836 | orchestrator | 2026-02-18 03:46:11.920847 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-18 03:46:11.920858 | orchestrator | Wednesday 18 February 2026 03:46:08 +0000 (0:00:02.640) 0:07:47.183 **** 2026-02-18 03:46:11.920868 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-18 03:46:11.920879 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-18 03:46:11.920889 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-18 03:46:11.920900 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-02-18 03:46:11.920922 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-18 03:46:45.086375 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-18 03:46:45.086502 | orchestrator | 2026-02-18 03:46:45.086524 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-18 03:46:45.086543 | orchestrator | Wednesday 18 February 2026 03:46:11 +0000 (0:00:03.767) 0:07:50.950 **** 2026-02-18 03:46:45.086561 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:46:45.086578 | 
orchestrator | skipping: [testbed-node-4] 2026-02-18 03:46:45.086628 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-18 03:46:45.086647 | orchestrator | 2026-02-18 03:46:45.086664 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-18 03:46:45.086681 | orchestrator | Wednesday 18 February 2026 03:46:15 +0000 (0:00:03.241) 0:07:54.191 **** 2026-02-18 03:46:45.086698 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:46:45.086715 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:46:45.086732 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-02-18 03:46:45.086750 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-18 03:46:45.086767 | orchestrator | 2026-02-18 03:46:45.086783 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-18 03:46:45.086798 | orchestrator | Wednesday 18 February 2026 03:46:27 +0000 (0:00:12.650) 0:08:06.842 **** 2026-02-18 03:46:45.086813 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:46:45.086830 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:46:45.086847 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:46:45.086864 | orchestrator | 2026-02-18 03:46:45.086881 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-18 03:46:45.086898 | orchestrator | Wednesday 18 February 2026 03:46:29 +0000 (0:00:01.248) 0:08:08.091 **** 2026-02-18 03:46:45.086915 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:46:45.086931 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:46:45.086948 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:46:45.086964 | orchestrator | 2026-02-18 03:46:45.086980 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-18 
03:46:45.086997 | orchestrator | Wednesday 18 February 2026 03:46:29 +0000 (0:00:00.366) 0:08:08.457 ****
2026-02-18 03:46:45.087015 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 03:46:45.087031 | orchestrator |
2026-02-18 03:46:45.087047 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-18 03:46:45.087064 | orchestrator | Wednesday 18 February 2026 03:46:30 +0000 (0:00:00.908) 0:08:09.366 ****
2026-02-18 03:46:45.087112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 03:46:45.087127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 03:46:45.087142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 03:46:45.087157 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087174 | orchestrator |
2026-02-18 03:46:45.087191 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-18 03:46:45.087208 | orchestrator | Wednesday 18 February 2026 03:46:30 +0000 (0:00:00.464) 0:08:09.830 ****
2026-02-18 03:46:45.087224 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087239 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:46:45.087255 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:46:45.087271 | orchestrator |
2026-02-18 03:46:45.087287 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-18 03:46:45.087304 | orchestrator | Wednesday 18 February 2026 03:46:31 +0000 (0:00:00.349) 0:08:10.179 ****
2026-02-18 03:46:45.087320 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087333 | orchestrator |
2026-02-18 03:46:45.087343 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-18 03:46:45.087353 | orchestrator | Wednesday 18 February 2026 03:46:31 +0000 (0:00:00.244) 0:08:10.424 ****
2026-02-18 03:46:45.087362 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087371 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:46:45.087381 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:46:45.087391 | orchestrator |
2026-02-18 03:46:45.087400 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-18 03:46:45.087410 | orchestrator | Wednesday 18 February 2026 03:46:31 +0000 (0:00:00.616) 0:08:11.041 ****
2026-02-18 03:46:45.087430 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087440 | orchestrator |
2026-02-18 03:46:45.087464 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-18 03:46:45.087474 | orchestrator | Wednesday 18 February 2026 03:46:32 +0000 (0:00:00.238) 0:08:11.279 ****
2026-02-18 03:46:45.087484 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087493 | orchestrator |
2026-02-18 03:46:45.087503 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-18 03:46:45.087512 | orchestrator | Wednesday 18 February 2026 03:46:32 +0000 (0:00:00.252) 0:08:11.531 ****
2026-02-18 03:46:45.087521 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087531 | orchestrator |
2026-02-18 03:46:45.087540 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-18 03:46:45.087550 | orchestrator | Wednesday 18 February 2026 03:46:32 +0000 (0:00:00.157) 0:08:11.689 ****
2026-02-18 03:46:45.087559 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087568 | orchestrator |
2026-02-18 03:46:45.087578 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-18 03:46:45.087587 | orchestrator | Wednesday 18 February 2026 03:46:32 +0000 (0:00:00.247) 0:08:11.937 ****
2026-02-18 03:46:45.087597 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087606 | orchestrator |
2026-02-18 03:46:45.087616 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-18 03:46:45.087625 | orchestrator | Wednesday 18 February 2026 03:46:33 +0000 (0:00:00.269) 0:08:12.207 ****
2026-02-18 03:46:45.087635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 03:46:45.087645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 03:46:45.087673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 03:46:45.087683 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087693 | orchestrator |
2026-02-18 03:46:45.087702 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-18 03:46:45.087712 | orchestrator | Wednesday 18 February 2026 03:46:33 +0000 (0:00:00.429) 0:08:12.636 ****
2026-02-18 03:46:45.087721 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087731 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:46:45.087740 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:46:45.087749 | orchestrator |
2026-02-18 03:46:45.087759 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-18 03:46:45.087769 | orchestrator | Wednesday 18 February 2026 03:46:33 +0000 (0:00:00.367) 0:08:13.004 ****
2026-02-18 03:46:45.087778 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087788 | orchestrator |
2026-02-18 03:46:45.087797 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-18 03:46:45.087806 | orchestrator | Wednesday 18 February 2026 03:46:34 +0000 (0:00:00.234) 0:08:13.238 ****
2026-02-18 03:46:45.087816 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.087825 | orchestrator |
2026-02-18 03:46:45.087835 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-18 03:46:45.087844 | orchestrator |
2026-02-18 03:46:45.087854 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-18 03:46:45.087863 | orchestrator | Wednesday 18 February 2026 03:46:35 +0000 (0:00:01.367) 0:08:14.605 ****
2026-02-18 03:46:45.087873 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:46:45.087884 | orchestrator |
2026-02-18 03:46:45.087893 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-18 03:46:45.087909 | orchestrator | Wednesday 18 February 2026 03:46:36 +0000 (0:00:01.278) 0:08:15.884 ****
2026-02-18 03:46:45.087925 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:46:45.087949 | orchestrator |
2026-02-18 03:46:45.087965 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-18 03:46:45.087982 | orchestrator | Wednesday 18 February 2026 03:46:38 +0000 (0:00:01.321) 0:08:17.205 ****
2026-02-18 03:46:45.087997 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.088013 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:46:45.088029 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:46:45.088046 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:46:45.088063 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:46:45.088117 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:46:45.088127 | orchestrator |
2026-02-18 03:46:45.088137 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-18 03:46:45.088146 | orchestrator | Wednesday 18 February 2026 03:46:39 +0000 (0:00:01.347) 0:08:18.552 ****
2026-02-18 03:46:45.088156 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:46:45.088166 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:46:45.088175 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:46:45.088185 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:46:45.088194 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:46:45.088204 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:46:45.088213 | orchestrator |
2026-02-18 03:46:45.088223 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-18 03:46:45.088232 | orchestrator | Wednesday 18 February 2026 03:46:40 +0000 (0:00:00.765) 0:08:19.318 ****
2026-02-18 03:46:45.088242 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:46:45.088251 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:46:45.088261 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:46:45.088270 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:46:45.088280 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:46:45.088289 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:46:45.088299 | orchestrator |
2026-02-18 03:46:45.088308 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-18 03:46:45.088318 | orchestrator | Wednesday 18 February 2026 03:46:41 +0000 (0:00:01.010) 0:08:20.329 ****
2026-02-18 03:46:45.088327 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:46:45.088337 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:46:45.088347 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:46:45.088356 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:46:45.088366 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:46:45.088375 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:46:45.088385 | orchestrator |
2026-02-18 03:46:45.088401 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-18 03:46:45.088411 | orchestrator | Wednesday 18 February 2026 03:46:42 +0000 (0:00:00.804) 0:08:21.134 ****
2026-02-18 03:46:45.088421 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.088430 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:46:45.088440 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:46:45.088449 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:46:45.088458 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:46:45.088468 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:46:45.088477 | orchestrator |
2026-02-18 03:46:45.088487 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-18 03:46:45.088496 | orchestrator | Wednesday 18 February 2026 03:46:43 +0000 (0:00:01.385) 0:08:22.519 ****
2026-02-18 03:46:45.088506 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.088515 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:46:45.088525 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:46:45.088534 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:46:45.088544 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:46:45.088553 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:46:45.088563 | orchestrator |
2026-02-18 03:46:45.088572 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-18 03:46:45.088582 | orchestrator | Wednesday 18 February 2026 03:46:44 +0000 (0:00:00.699) 0:08:23.219 ****
2026-02-18 03:46:45.088591 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:46:45.088609 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:46:45.088618 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:46:45.088628 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:46:45.088646 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:47:18.907759 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:47:18.907866 | orchestrator |
2026-02-18 03:47:18.907880 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-18 03:47:18.907892 | orchestrator | Wednesday 18 February 2026 03:46:45 +0000 (0:00:00.901) 0:08:24.120 ****
2026-02-18 03:47:18.907901 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:18.907911 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:18.907919 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:18.907928 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:47:18.907936 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:47:18.907945 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:47:18.907954 | orchestrator |
2026-02-18 03:47:18.907962 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-18 03:47:18.907971 | orchestrator | Wednesday 18 February 2026 03:46:46 +0000 (0:00:01.095) 0:08:25.215 ****
2026-02-18 03:47:18.907979 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:18.907988 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:18.907997 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:18.908005 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:47:18.908065 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:47:18.908075 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:47:18.908084 | orchestrator |
2026-02-18 03:47:18.908092 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-18 03:47:18.908101 | orchestrator | Wednesday 18 February 2026 03:46:47 +0000 (0:00:01.428) 0:08:26.643 ****
2026-02-18 03:47:18.908110 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:18.908119 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:18.908128 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:18.908136 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:47:18.908145 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:47:18.908153 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:47:18.908162 | orchestrator |
2026-02-18 03:47:18.908170 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-18 03:47:18.908179 | orchestrator | Wednesday 18 February 2026 03:46:48 +0000 (0:00:00.721) 0:08:27.365 ****
2026-02-18 03:47:18.908187 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:18.908196 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:18.908204 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:18.908213 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:47:18.908221 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:47:18.908230 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:47:18.908239 | orchestrator |
2026-02-18 03:47:18.908247 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-18 03:47:18.908256 | orchestrator | Wednesday 18 February 2026 03:46:49 +0000 (0:00:00.952) 0:08:28.318 ****
2026-02-18 03:47:18.908264 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:18.908273 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:18.908281 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:18.908290 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:47:18.908299 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:47:18.908309 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:47:18.908319 | orchestrator |
2026-02-18 03:47:18.908329 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-18 03:47:18.908338 | orchestrator | Wednesday 18 February 2026 03:46:49 +0000 (0:00:00.689) 0:08:29.007 ****
2026-02-18 03:47:18.908348 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:18.908357 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:18.908367 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:18.908377 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:47:18.908387 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:47:18.908419 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:47:18.908430 | orchestrator |
2026-02-18 03:47:18.908439 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-18 03:47:18.908449 | orchestrator | Wednesday 18 February 2026 03:46:50 +0000 (0:00:00.876) 0:08:29.884 ****
2026-02-18 03:47:18.908459 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:18.908469 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:18.908478 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:18.908488 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:47:18.908498 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:47:18.908508 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:47:18.908518 | orchestrator |
2026-02-18 03:47:18.908528 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-18 03:47:18.908537 | orchestrator | Wednesday 18 February 2026 03:46:51 +0000 (0:00:00.652) 0:08:30.536 ****
2026-02-18 03:47:18.908547 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:18.908557 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:18.908566 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:18.908576 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:47:18.908586 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:47:18.908596 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:47:18.908606 | orchestrator |
2026-02-18 03:47:18.908615 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-18 03:47:18.908626 | orchestrator | Wednesday 18 February 2026 03:46:52 +0000 (0:00:00.893) 0:08:31.430 ****
2026-02-18 03:47:18.908636 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:18.908645 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:18.908655 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:18.908665 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:47:18.908674 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:47:18.908684 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:47:18.908694 | orchestrator |
2026-02-18 03:47:18.908703 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-18 03:47:18.908711 | orchestrator | Wednesday 18 February 2026 03:46:53 +0000 (0:00:00.684) 0:08:32.115 ****
2026-02-18 03:47:18.908720 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:18.908728 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:18.908737 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:18.908745 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:47:18.908753 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:47:18.908762 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:47:18.908770 | orchestrator |
2026-02-18 03:47:18.908779 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-18 03:47:18.908787 | orchestrator | Wednesday 18 February 2026 03:46:53 +0000 (0:00:00.908) 0:08:33.024 ****
2026-02-18 03:47:18.908796 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:18.908804 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:18.908813 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:18.908864 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:47:18.908889 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:47:18.908898 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:47:18.908907 | orchestrator |
2026-02-18 03:47:18.908916 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-18 03:47:18.908924 | orchestrator | Wednesday 18 February 2026 03:46:54 +0000 (0:00:00.705) 0:08:33.729 ****
2026-02-18 03:47:18.908933 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:18.908941 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:18.908949 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:18.908958 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:47:18.908966 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:47:18.908975 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:47:18.908984 | orchestrator |
2026-02-18 03:47:18.908999 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-18 03:47:18.909068 | orchestrator | Wednesday 18 February 2026 03:46:56 +0000 (0:00:01.452) 0:08:35.182 ****
2026-02-18 03:47:18.909090 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-18 03:47:18.909100 | orchestrator |
2026-02-18 03:47:18.909108 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-18 03:47:18.909117 | orchestrator | Wednesday 18 February 2026 03:47:00 +0000 (0:00:04.137) 0:08:39.320 ****
2026-02-18 03:47:18.909125 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-18 03:47:18.909134 | orchestrator |
2026-02-18 03:47:18.909142 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-18 03:47:18.909151 | orchestrator | Wednesday 18 February 2026 03:47:02 +0000 (0:00:02.492) 0:08:41.812 ****
2026-02-18 03:47:18.909160 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:47:18.909168 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:47:18.909177 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:47:18.909185 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:47:18.909193 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:47:18.909202 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:47:18.909210 | orchestrator |
2026-02-18 03:47:18.909219 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-18 03:47:18.909227 | orchestrator | Wednesday 18 February 2026 03:47:04 +0000 (0:00:01.859) 0:08:43.672 ****
2026-02-18 03:47:18.909236 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:47:18.909244 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:47:18.909253 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:47:18.909261 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:47:18.909270 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:47:18.909278 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:47:18.909287 | orchestrator |
2026-02-18 03:47:18.909295 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-18 03:47:18.909304 | orchestrator | Wednesday 18 February 2026 03:47:06 +0000 (0:00:01.464) 0:08:45.136 ****
2026-02-18 03:47:18.909313 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:47:18.909323 | orchestrator |
2026-02-18 03:47:18.909332 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-18 03:47:18.909340 | orchestrator | Wednesday 18 February 2026 03:47:07 +0000 (0:00:01.440) 0:08:46.576 ****
2026-02-18 03:47:18.909349 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:47:18.909357 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:47:18.909366 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:47:18.909374 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:47:18.909383 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:47:18.909391 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:47:18.909400 | orchestrator |
2026-02-18 03:47:18.909408 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-18 03:47:18.909417 | orchestrator | Wednesday 18 February 2026 03:47:09 +0000 (0:00:03.897) 0:08:48.299 ****
2026-02-18 03:47:18.909425 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:47:18.909434 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:47:18.909442 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:47:18.909451 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:47:18.909459 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:47:18.909468 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:47:18.909476 | orchestrator |
2026-02-18 03:47:18.909484 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-02-18 03:47:18.909493 | orchestrator | Wednesday 18 February 2026 03:47:13 +0000 (0:00:03.897) 0:08:52.197 ****
2026-02-18 03:47:18.909508 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:47:18.909517 | orchestrator |
2026-02-18 03:47:18.909525 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-02-18 03:47:18.909542 | orchestrator | Wednesday 18 February 2026 03:47:14 +0000 (0:00:01.392) 0:08:53.589 ****
2026-02-18 03:47:18.909550 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:18.909559 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:18.909567 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:18.909576 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:47:18.909584 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:47:18.909593 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:47:18.909601 | orchestrator |
2026-02-18 03:47:18.909610 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-02-18 03:47:18.909619 | orchestrator | Wednesday 18 February 2026 03:47:15 +0000 (0:00:00.759) 0:08:54.349 ****
2026-02-18 03:47:18.909627 | orchestrator | changed: [testbed-node-3]
2026-02-18 03:47:18.909636 | orchestrator | changed: [testbed-node-4]
2026-02-18 03:47:18.909644 | orchestrator | changed: [testbed-node-5]
2026-02-18 03:47:18.909653 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:47:18.909661 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:47:18.909670 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:47:18.909678 | orchestrator |
2026-02-18 03:47:18.909687 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-02-18 03:47:18.909695 | orchestrator | Wednesday 18 February 2026 03:47:17 +0000 (0:00:02.596) 0:08:56.945 ****
2026-02-18 03:47:18.909704 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:18.909713 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:18.909721 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:18.909730 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:47:18.909745 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:47:47.496365 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:47:47.496465 | orchestrator |
2026-02-18 03:47:47.496476 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-02-18 03:47:47.496485 | orchestrator |
2026-02-18 03:47:47.496493 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-18 03:47:47.496501 | orchestrator | Wednesday 18 February 2026 03:47:18 +0000 (0:00:00.989) 0:08:57.935 ****
2026-02-18 03:47:47.496510 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 03:47:47.496518 | orchestrator |
2026-02-18 03:47:47.496526 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-18 03:47:47.496533 | orchestrator | Wednesday 18 February 2026 03:47:19 +0000 (0:00:00.902) 0:08:58.837 ****
2026-02-18 03:47:47.496540 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 03:47:47.496547 | orchestrator |
2026-02-18 03:47:47.496555 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-18 03:47:47.496562 | orchestrator | Wednesday 18 February 2026 03:47:20 +0000 (0:00:00.541) 0:08:59.378 ****
2026-02-18 03:47:47.496569 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:47.496577 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:47.496584 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:47.496591 | orchestrator |
2026-02-18 03:47:47.496598 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-18 03:47:47.496605 | orchestrator | Wednesday 18 February 2026 03:47:20 +0000 (0:00:00.651) 0:09:00.030 ****
2026-02-18 03:47:47.496613 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:47.496620 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:47.496627 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:47.496634 | orchestrator |
2026-02-18 03:47:47.496641 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-18 03:47:47.496648 | orchestrator | Wednesday 18 February 2026 03:47:21 +0000 (0:00:00.787) 0:09:00.817 ****
2026-02-18 03:47:47.496655 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:47.496662 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:47.496669 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:47.496682 | orchestrator |
2026-02-18 03:47:47.496695 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-18 03:47:47.496731 | orchestrator | Wednesday 18 February 2026 03:47:22 +0000 (0:00:00.728) 0:09:01.546 ****
2026-02-18 03:47:47.496744 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:47.496757 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:47.496768 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:47.496780 | orchestrator |
2026-02-18 03:47:47.496791 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-18 03:47:47.496804 | orchestrator | Wednesday 18 February 2026 03:47:23 +0000 (0:00:01.023) 0:09:02.569 ****
2026-02-18 03:47:47.496815 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:47.496829 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:47.496841 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:47.496852 | orchestrator |
2026-02-18 03:47:47.496865 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-18 03:47:47.496877 | orchestrator | Wednesday 18 February 2026 03:47:23 +0000 (0:00:00.363) 0:09:02.933 ****
2026-02-18 03:47:47.496889 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:47.496904 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:47.496918 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:47.496932 | orchestrator |
2026-02-18 03:47:47.496945 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-18 03:47:47.496954 | orchestrator | Wednesday 18 February 2026 03:47:24 +0000 (0:00:00.357) 0:09:03.291 ****
2026-02-18 03:47:47.497077 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:47.497090 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:47.497097 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:47.497104 | orchestrator |
2026-02-18 03:47:47.497111 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-18 03:47:47.497119 | orchestrator | Wednesday 18 February 2026 03:47:24 +0000 (0:00:00.336) 0:09:03.627 ****
2026-02-18 03:47:47.497126 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:47.497134 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:47.497141 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:47.497148 | orchestrator |
2026-02-18 03:47:47.497155 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-18 03:47:47.497176 | orchestrator | Wednesday 18 February 2026 03:47:25 +0000 (0:00:01.004) 0:09:04.632 ****
2026-02-18 03:47:47.497183 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:47.497190 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:47.497197 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:47.497204 | orchestrator |
2026-02-18 03:47:47.497211 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-18 03:47:47.497218 | orchestrator | Wednesday 18 February 2026 03:47:26 +0000 (0:00:00.797) 0:09:05.430 ****
2026-02-18 03:47:47.497225 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:47.497232 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:47.497239 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:47.497246 | orchestrator |
2026-02-18 03:47:47.497253 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-18 03:47:47.497260 | orchestrator | Wednesday 18 February 2026 03:47:26 +0000 (0:00:00.335) 0:09:05.766 ****
2026-02-18 03:47:47.497267 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:47.497274 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:47.497281 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:47.497288 | orchestrator |
2026-02-18 03:47:47.497295 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-18 03:47:47.497302 | orchestrator | Wednesday 18 February 2026 03:47:27 +0000 (0:00:00.345) 0:09:06.111 ****
2026-02-18 03:47:47.497309 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:47.497316 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:47.497323 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:47.497330 | orchestrator |
2026-02-18 03:47:47.497338 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-18 03:47:47.497345 | orchestrator | Wednesday 18 February 2026 03:47:27 +0000 (0:00:00.689) 0:09:06.800 ****
2026-02-18 03:47:47.497378 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:47.497386 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:47.497394 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:47.497401 | orchestrator |
2026-02-18 03:47:47.497408 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-18 03:47:47.497415 | orchestrator | Wednesday 18 February 2026 03:47:28 +0000 (0:00:00.372) 0:09:07.172 ****
2026-02-18 03:47:47.497422 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:47.497430 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:47.497437 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:47.497444 | orchestrator |
2026-02-18 03:47:47.497451 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-18 03:47:47.497458 | orchestrator | Wednesday 18 February 2026 03:47:28 +0000 (0:00:00.348) 0:09:07.520 ****
2026-02-18 03:47:47.497465 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:47.497473 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:47.497480 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:47.497487 | orchestrator |
2026-02-18 03:47:47.497494 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-18 03:47:47.497501 | orchestrator | Wednesday 18 February 2026 03:47:28 +0000 (0:00:00.316) 0:09:07.837 ****
2026-02-18 03:47:47.497508 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:47.497515 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:47.497522 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:47.497529 | orchestrator |
2026-02-18 03:47:47.497536 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-18 03:47:47.497543 | orchestrator | Wednesday 18 February 2026 03:47:29 +0000 (0:00:00.610) 0:09:08.447 ****
2026-02-18 03:47:47.497550 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:47.497558 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:47.497565 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:47.497572 | orchestrator |
2026-02-18 03:47:47.497579 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-18 03:47:47.497586 | orchestrator | Wednesday 18 February 2026 03:47:29 +0000 (0:00:00.336) 0:09:08.784 ****
2026-02-18 03:47:47.497594 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:47.497601 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:47.497608 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:47.497615 | orchestrator |
2026-02-18 03:47:47.497622 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-18 03:47:47.497629 | orchestrator | Wednesday 18 February 2026 03:47:30 +0000 (0:00:00.380) 0:09:09.164 ****
2026-02-18 03:47:47.497636 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:47:47.497643 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:47:47.497650 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:47:47.497657 | orchestrator |
2026-02-18 03:47:47.497665 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-18 03:47:47.497672 | orchestrator | Wednesday 18 February 2026 03:47:30 +0000 (0:00:00.875) 0:09:10.039 ****
2026-02-18 03:47:47.497679 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:47:47.497686 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:47:47.497693 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-18 03:47:47.497701 | orchestrator |
2026-02-18 03:47:47.497708 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-18 03:47:47.497715 | orchestrator | Wednesday 18 February 2026 03:47:31 +0000 (0:00:00.490) 0:09:10.530 ****
2026-02-18 03:47:47.497722 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-18 03:47:47.497730 | orchestrator |
2026-02-18 03:47:47.497737 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-18 03:47:47.497744 | orchestrator | Wednesday 18 February 2026 03:47:33 +0000 (0:00:02.078) 0:09:12.609 ****
2026-02-18 03:47:47.497752 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-18 03:47:47.497769 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:47:47.497776 | orchestrator |
2026-02-18 03:47:47.497786 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-18 03:47:47.497799 | orchestrator | Wednesday 18 February 2026 03:47:33 +0000 (0:00:00.266) 0:09:12.875 ****
2026-02-18 03:47:47.497820 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-18 03:47:47.497841 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-18 03:47:47.497854 | orchestrator |
2026-02-18 03:47:47.497867 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-18 03:47:47.497878 | orchestrator | Wednesday 18 February 2026 03:47:42 +0000 (0:00:08.409) 0:09:21.285 ****
2026-02-18 03:47:47.497891 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-18 03:47:47.497903 | orchestrator |
2026-02-18 03:47:47.497916 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-18 03:47:47.497930 | orchestrator | Wednesday 18 February 2026 03:47:45 +0000 (0:00:03.297) 0:09:24.582 ****
2026-02-18 03:47:47.497943 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 03:47:47.497956 | orchestrator |
2026-02-18 03:47:47.497983 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-18 03:47:47.497991 | orchestrator | Wednesday 18 February 2026 03:47:46 +0000 (0:00:00.867) 0:09:25.450 ****
2026-02-18 03:47:47.498005 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-18 03:48:14.660575 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-18 03:48:14.660681 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-18 03:48:14.660693 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-18 03:48:14.660703 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-18 03:48:14.660712 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-18 03:48:14.660721 | orchestrator |
2026-02-18 03:48:14.660731 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-18 03:48:14.660739 | orchestrator | Wednesday 18 February 2026 03:47:47 +0000 (0:00:01.080) 0:09:26.530
**** 2026-02-18 03:48:14.660747 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:48:14.660756 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-18 03:48:14.660764 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 03:48:14.660772 | orchestrator | 2026-02-18 03:48:14.660780 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-18 03:48:14.660787 | orchestrator | Wednesday 18 February 2026 03:47:49 +0000 (0:00:02.233) 0:09:28.764 **** 2026-02-18 03:48:14.660796 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-18 03:48:14.660805 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-18 03:48:14.660812 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:48:14.660820 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-18 03:48:14.660828 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-18 03:48:14.660836 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:48:14.660859 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-18 03:48:14.660869 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-18 03:48:14.660908 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:48:14.660916 | orchestrator | 2026-02-18 03:48:14.660980 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-18 03:48:14.660989 | orchestrator | Wednesday 18 February 2026 03:47:50 +0000 (0:00:01.201) 0:09:29.966 **** 2026-02-18 03:48:14.660997 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:48:14.661004 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:48:14.661012 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:48:14.661019 | orchestrator | 2026-02-18 03:48:14.661027 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-18 
03:48:14.661034 | orchestrator | Wednesday 18 February 2026 03:47:53 +0000 (0:00:03.009) 0:09:32.975 **** 2026-02-18 03:48:14.661042 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:48:14.661050 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:48:14.661058 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:48:14.661066 | orchestrator | 2026-02-18 03:48:14.661074 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-18 03:48:14.661082 | orchestrator | Wednesday 18 February 2026 03:47:54 +0000 (0:00:00.356) 0:09:33.331 **** 2026-02-18 03:48:14.661090 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:48:14.661098 | orchestrator | 2026-02-18 03:48:14.661174 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-18 03:48:14.661185 | orchestrator | Wednesday 18 February 2026 03:47:55 +0000 (0:00:00.828) 0:09:34.160 **** 2026-02-18 03:48:14.661193 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:48:14.661203 | orchestrator | 2026-02-18 03:48:14.661210 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-18 03:48:14.661218 | orchestrator | Wednesday 18 February 2026 03:47:55 +0000 (0:00:00.571) 0:09:34.731 **** 2026-02-18 03:48:14.661227 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:48:14.661234 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:48:14.661240 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:48:14.661249 | orchestrator | 2026-02-18 03:48:14.661256 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-18 03:48:14.661279 | orchestrator | Wednesday 18 February 2026 03:47:57 +0000 (0:00:01.330) 0:09:36.062 **** 2026-02-18 03:48:14.661288 
| orchestrator | changed: [testbed-node-3] 2026-02-18 03:48:14.661297 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:48:14.661306 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:48:14.661314 | orchestrator | 2026-02-18 03:48:14.661323 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-18 03:48:14.661332 | orchestrator | Wednesday 18 February 2026 03:47:58 +0000 (0:00:01.456) 0:09:37.518 **** 2026-02-18 03:48:14.661341 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:48:14.661349 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:48:14.661357 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:48:14.661366 | orchestrator | 2026-02-18 03:48:14.661375 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-18 03:48:14.661384 | orchestrator | Wednesday 18 February 2026 03:48:00 +0000 (0:00:01.913) 0:09:39.432 **** 2026-02-18 03:48:14.661394 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:48:14.661402 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:48:14.661410 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:48:14.661419 | orchestrator | 2026-02-18 03:48:14.661427 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-18 03:48:14.661436 | orchestrator | Wednesday 18 February 2026 03:48:02 +0000 (0:00:01.986) 0:09:41.418 **** 2026-02-18 03:48:14.661444 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:14.661453 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:14.661461 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:14.661469 | orchestrator | 2026-02-18 03:48:14.661478 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-18 03:48:14.661497 | orchestrator | Wednesday 18 February 2026 03:48:03 +0000 (0:00:01.484) 0:09:42.902 **** 2026-02-18 03:48:14.661506 | orchestrator | changed: 
[testbed-node-3] 2026-02-18 03:48:14.661515 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:48:14.661541 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:48:14.661550 | orchestrator | 2026-02-18 03:48:14.661557 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-18 03:48:14.661565 | orchestrator | Wednesday 18 February 2026 03:48:04 +0000 (0:00:00.713) 0:09:43.616 **** 2026-02-18 03:48:14.661573 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:48:14.661581 | orchestrator | 2026-02-18 03:48:14.661589 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-02-18 03:48:14.661597 | orchestrator | Wednesday 18 February 2026 03:48:05 +0000 (0:00:00.875) 0:09:44.491 **** 2026-02-18 03:48:14.661605 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:14.661613 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:14.661620 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:14.661628 | orchestrator | 2026-02-18 03:48:14.661637 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-18 03:48:14.661645 | orchestrator | Wednesday 18 February 2026 03:48:05 +0000 (0:00:00.375) 0:09:44.867 **** 2026-02-18 03:48:14.661653 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:48:14.661660 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:48:14.661668 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:48:14.661677 | orchestrator | 2026-02-18 03:48:14.661684 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-18 03:48:14.661692 | orchestrator | Wednesday 18 February 2026 03:48:07 +0000 (0:00:01.234) 0:09:46.102 **** 2026-02-18 03:48:14.661699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 03:48:14.661706 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 03:48:14.661714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 03:48:14.661721 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:48:14.661728 | orchestrator | 2026-02-18 03:48:14.661735 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-18 03:48:14.661742 | orchestrator | Wednesday 18 February 2026 03:48:08 +0000 (0:00:00.975) 0:09:47.077 **** 2026-02-18 03:48:14.661749 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:14.661756 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:14.661764 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:14.661771 | orchestrator | 2026-02-18 03:48:14.661778 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-18 03:48:14.661786 | orchestrator | 2026-02-18 03:48:14.661794 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 03:48:14.661801 | orchestrator | Wednesday 18 February 2026 03:48:08 +0000 (0:00:00.895) 0:09:47.972 **** 2026-02-18 03:48:14.661809 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:48:14.661817 | orchestrator | 2026-02-18 03:48:14.661825 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 03:48:14.661832 | orchestrator | Wednesday 18 February 2026 03:48:09 +0000 (0:00:00.569) 0:09:48.542 **** 2026-02-18 03:48:14.661840 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:48:14.661848 | orchestrator | 2026-02-18 03:48:14.661855 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 03:48:14.661862 | 
orchestrator | Wednesday 18 February 2026 03:48:10 +0000 (0:00:00.850) 0:09:49.392 **** 2026-02-18 03:48:14.661870 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:48:14.661877 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:48:14.661885 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:48:14.661900 | orchestrator | 2026-02-18 03:48:14.661908 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 03:48:14.661916 | orchestrator | Wednesday 18 February 2026 03:48:10 +0000 (0:00:00.359) 0:09:49.751 **** 2026-02-18 03:48:14.661951 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:14.661960 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:14.661969 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:14.661976 | orchestrator | 2026-02-18 03:48:14.661984 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-18 03:48:14.661992 | orchestrator | Wednesday 18 February 2026 03:48:11 +0000 (0:00:00.754) 0:09:50.506 **** 2026-02-18 03:48:14.662000 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:14.662059 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:14.662070 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:14.662079 | orchestrator | 2026-02-18 03:48:14.662088 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-18 03:48:14.662097 | orchestrator | Wednesday 18 February 2026 03:48:12 +0000 (0:00:01.047) 0:09:51.554 **** 2026-02-18 03:48:14.662112 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:14.662121 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:14.662129 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:14.662138 | orchestrator | 2026-02-18 03:48:14.662146 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-18 03:48:14.662155 | orchestrator | Wednesday 18 February 2026 03:48:13 +0000 
(0:00:00.778) 0:09:52.333 **** 2026-02-18 03:48:14.662164 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:48:14.662173 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:48:14.662183 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:48:14.662192 | orchestrator | 2026-02-18 03:48:14.662200 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-18 03:48:14.662209 | orchestrator | Wednesday 18 February 2026 03:48:13 +0000 (0:00:00.340) 0:09:52.673 **** 2026-02-18 03:48:14.662218 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:48:14.662226 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:48:14.662234 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:48:14.662243 | orchestrator | 2026-02-18 03:48:14.662251 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-18 03:48:14.662260 | orchestrator | Wednesday 18 February 2026 03:48:13 +0000 (0:00:00.357) 0:09:53.031 **** 2026-02-18 03:48:14.662269 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:48:14.662277 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:48:14.662286 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:48:14.662294 | orchestrator | 2026-02-18 03:48:14.662313 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-18 03:48:37.030589 | orchestrator | Wednesday 18 February 2026 03:48:14 +0000 (0:00:00.662) 0:09:53.694 **** 2026-02-18 03:48:37.030704 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:37.030721 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:37.030734 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:37.030746 | orchestrator | 2026-02-18 03:48:37.030759 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-18 03:48:37.030770 | orchestrator | Wednesday 18 February 2026 03:48:15 +0000 (0:00:00.779) 
0:09:54.473 **** 2026-02-18 03:48:37.030781 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:37.030793 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:37.030804 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:37.030816 | orchestrator | 2026-02-18 03:48:37.030827 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-18 03:48:37.030838 | orchestrator | Wednesday 18 February 2026 03:48:16 +0000 (0:00:00.728) 0:09:55.202 **** 2026-02-18 03:48:37.030850 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:48:37.030862 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:48:37.030873 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:48:37.030884 | orchestrator | 2026-02-18 03:48:37.030949 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 03:48:37.030985 | orchestrator | Wednesday 18 February 2026 03:48:16 +0000 (0:00:00.349) 0:09:55.551 **** 2026-02-18 03:48:37.030998 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:48:37.031009 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:48:37.031022 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:48:37.031033 | orchestrator | 2026-02-18 03:48:37.031044 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 03:48:37.031069 | orchestrator | Wednesday 18 February 2026 03:48:17 +0000 (0:00:00.611) 0:09:56.162 **** 2026-02-18 03:48:37.031082 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:37.031093 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:37.031113 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:37.031125 | orchestrator | 2026-02-18 03:48:37.031136 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 03:48:37.031148 | orchestrator | Wednesday 18 February 2026 03:48:17 +0000 (0:00:00.385) 0:09:56.548 **** 2026-02-18 
03:48:37.031159 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:37.031171 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:37.031182 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:37.031193 | orchestrator | 2026-02-18 03:48:37.031205 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 03:48:37.031216 | orchestrator | Wednesday 18 February 2026 03:48:17 +0000 (0:00:00.373) 0:09:56.921 **** 2026-02-18 03:48:37.031227 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:37.031239 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:37.031250 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:37.031261 | orchestrator | 2026-02-18 03:48:37.031272 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 03:48:37.031284 | orchestrator | Wednesday 18 February 2026 03:48:18 +0000 (0:00:00.367) 0:09:57.289 **** 2026-02-18 03:48:37.031295 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:48:37.031306 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:48:37.031318 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:48:37.031329 | orchestrator | 2026-02-18 03:48:37.031340 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 03:48:37.031352 | orchestrator | Wednesday 18 February 2026 03:48:18 +0000 (0:00:00.631) 0:09:57.921 **** 2026-02-18 03:48:37.031364 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:48:37.031375 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:48:37.031386 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:48:37.031397 | orchestrator | 2026-02-18 03:48:37.031408 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 03:48:37.031419 | orchestrator | Wednesday 18 February 2026 03:48:19 +0000 (0:00:00.403) 0:09:58.324 **** 2026-02-18 03:48:37.031430 | orchestrator | 
skipping: [testbed-node-3] 2026-02-18 03:48:37.031442 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:48:37.031453 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:48:37.031464 | orchestrator | 2026-02-18 03:48:37.031475 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 03:48:37.031486 | orchestrator | Wednesday 18 February 2026 03:48:19 +0000 (0:00:00.376) 0:09:58.701 **** 2026-02-18 03:48:37.031497 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:37.031508 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:37.031519 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:37.031530 | orchestrator | 2026-02-18 03:48:37.031558 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 03:48:37.031569 | orchestrator | Wednesday 18 February 2026 03:48:20 +0000 (0:00:00.386) 0:09:59.087 **** 2026-02-18 03:48:37.031580 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:48:37.031591 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:48:37.031602 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:48:37.031613 | orchestrator | 2026-02-18 03:48:37.031625 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-18 03:48:37.031636 | orchestrator | Wednesday 18 February 2026 03:48:20 +0000 (0:00:00.918) 0:10:00.005 **** 2026-02-18 03:48:37.031656 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:48:37.031669 | orchestrator | 2026-02-18 03:48:37.031680 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-18 03:48:37.031691 | orchestrator | Wednesday 18 February 2026 03:48:21 +0000 (0:00:00.621) 0:10:00.627 **** 2026-02-18 03:48:37.031702 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:48:37.031714 | 
orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-18 03:48:37.031725 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 03:48:37.031736 | orchestrator | 2026-02-18 03:48:37.031747 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-18 03:48:37.031758 | orchestrator | Wednesday 18 February 2026 03:48:24 +0000 (0:00:02.483) 0:10:03.110 **** 2026-02-18 03:48:37.031770 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-18 03:48:37.031781 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-18 03:48:37.031792 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:48:37.031820 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-18 03:48:37.031831 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-18 03:48:37.031841 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:48:37.031851 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-18 03:48:37.031863 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-18 03:48:37.031873 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:48:37.031885 | orchestrator | 2026-02-18 03:48:37.031914 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-18 03:48:37.031925 | orchestrator | Wednesday 18 February 2026 03:48:25 +0000 (0:00:01.554) 0:10:04.665 **** 2026-02-18 03:48:37.031936 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:48:37.031947 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:48:37.031957 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:48:37.031968 | orchestrator | 2026-02-18 03:48:37.031979 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-18 03:48:37.031989 | orchestrator | Wednesday 18 February 2026 03:48:25 +0000 (0:00:00.347) 0:10:05.012 **** 2026-02-18 03:48:37.032000 | 
orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:48:37.032011 | orchestrator | 2026-02-18 03:48:37.032021 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-18 03:48:37.032032 | orchestrator | Wednesday 18 February 2026 03:48:26 +0000 (0:00:00.585) 0:10:05.598 **** 2026-02-18 03:48:37.032045 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 03:48:37.032058 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 03:48:37.032069 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-18 03:48:37.032080 | orchestrator | 2026-02-18 03:48:37.032092 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-18 03:48:37.032103 | orchestrator | Wednesday 18 February 2026 03:48:27 +0000 (0:00:01.183) 0:10:06.781 **** 2026-02-18 03:48:37.032114 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:48:37.032125 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-18 03:48:37.032136 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:48:37.032147 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-18 03:48:37.032167 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2026-02-18 03:48:37.032179 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-18 03:48:37.032190 | orchestrator | 2026-02-18 03:48:37.032201 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-18 03:48:37.032212 | orchestrator | Wednesday 18 February 2026 03:48:32 +0000 (0:00:04.445) 0:10:11.226 **** 2026-02-18 03:48:37.032223 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:48:37.032234 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 03:48:37.032245 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:48:37.032256 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 03:48:37.032268 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:48:37.032286 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 03:48:37.032297 | orchestrator | 2026-02-18 03:48:37.032308 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-18 03:48:37.032319 | orchestrator | Wednesday 18 February 2026 03:48:34 +0000 (0:00:02.416) 0:10:13.643 **** 2026-02-18 03:48:37.032330 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-18 03:48:37.032342 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:48:37.032353 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-18 03:48:37.032365 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:48:37.032375 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-18 03:48:37.032387 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:48:37.032397 | orchestrator | 2026-02-18 03:48:37.032408 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] 
************************************** 2026-02-18 03:48:37.032419 | orchestrator | Wednesday 18 February 2026 03:48:36 +0000 (0:00:01.519) 0:10:15.163 **** 2026-02-18 03:48:37.032430 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-18 03:48:37.032442 | orchestrator | 2026-02-18 03:48:37.032453 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-18 03:48:37.032464 | orchestrator | Wednesday 18 February 2026 03:48:36 +0000 (0:00:00.247) 0:10:15.410 **** 2026-02-18 03:48:37.032475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 03:48:37.032487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 03:48:37.032507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 03:49:21.367681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 03:49:21.367808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 03:49:21.367906 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:21.367925 | orchestrator | 2026-02-18 03:49:21.367940 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-18 03:49:21.367956 | orchestrator | Wednesday 18 February 2026 03:48:37 +0000 (0:00:00.652) 0:10:16.063 **** 2026-02-18 03:49:21.367970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 03:49:21.367984 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 03:49:21.367997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 03:49:21.368039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 03:49:21.368054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 03:49:21.368067 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:21.368079 | orchestrator | 2026-02-18 03:49:21.368091 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-18 03:49:21.368103 | orchestrator | Wednesday 18 February 2026 03:48:37 +0000 (0:00:00.614) 0:10:16.678 **** 2026-02-18 03:49:21.368116 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 03:49:21.368130 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 03:49:21.368143 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 03:49:21.368156 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 03:49:21.368169 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}}) 2026-02-18 03:49:21.368182 | orchestrator | 2026-02-18 03:49:21.368195 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-18 03:49:21.368208 | orchestrator | Wednesday 18 February 2026 03:49:08 +0000 (0:00:30.560) 0:10:47.239 **** 2026-02-18 03:49:21.368222 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:21.368236 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:21.368250 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:21.368264 | orchestrator | 2026-02-18 03:49:21.368278 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-18 03:49:21.368292 | orchestrator | Wednesday 18 February 2026 03:49:08 +0000 (0:00:00.363) 0:10:47.602 **** 2026-02-18 03:49:21.368301 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:21.368309 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:21.368316 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:21.368324 | orchestrator | 2026-02-18 03:49:21.368346 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-18 03:49:21.368354 | orchestrator | Wednesday 18 February 2026 03:49:08 +0000 (0:00:00.333) 0:10:47.935 **** 2026-02-18 03:49:21.368362 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:49:21.368370 | orchestrator | 2026-02-18 03:49:21.368378 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-18 03:49:21.368386 | orchestrator | Wednesday 18 February 2026 03:49:09 +0000 (0:00:00.891) 0:10:48.827 **** 2026-02-18 03:49:21.368393 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:49:21.368401 | orchestrator | 2026-02-18 03:49:21.368409 | orchestrator | TASK [ceph-rgw : 
Generate systemd unit file] *********************************** 2026-02-18 03:49:21.368417 | orchestrator | Wednesday 18 February 2026 03:49:10 +0000 (0:00:00.566) 0:10:49.393 **** 2026-02-18 03:49:21.368425 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:49:21.368433 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:49:21.368441 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:49:21.368448 | orchestrator | 2026-02-18 03:49:21.368456 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-18 03:49:21.368464 | orchestrator | Wednesday 18 February 2026 03:49:12 +0000 (0:00:01.652) 0:10:51.046 **** 2026-02-18 03:49:21.368481 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:49:21.368489 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:49:21.368496 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:49:21.368504 | orchestrator | 2026-02-18 03:49:21.368512 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-18 03:49:21.368519 | orchestrator | Wednesday 18 February 2026 03:49:13 +0000 (0:00:01.203) 0:10:52.249 **** 2026-02-18 03:49:21.368527 | orchestrator | changed: [testbed-node-3] 2026-02-18 03:49:21.368555 | orchestrator | changed: [testbed-node-5] 2026-02-18 03:49:21.368564 | orchestrator | changed: [testbed-node-4] 2026-02-18 03:49:21.368572 | orchestrator | 2026-02-18 03:49:21.368579 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-18 03:49:21.368587 | orchestrator | Wednesday 18 February 2026 03:49:14 +0000 (0:00:01.791) 0:10:54.041 **** 2026-02-18 03:49:21.368595 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 03:49:21.368603 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 
'radosgw_frontend_port': 8081}) 2026-02-18 03:49:21.368610 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 03:49:21.368618 | orchestrator | 2026-02-18 03:49:21.368625 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-18 03:49:21.368633 | orchestrator | Wednesday 18 February 2026 03:49:17 +0000 (0:00:02.732) 0:10:56.774 **** 2026-02-18 03:49:21.368641 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:21.368648 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:21.368656 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:21.368663 | orchestrator | 2026-02-18 03:49:21.368671 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-18 03:49:21.368678 | orchestrator | Wednesday 18 February 2026 03:49:18 +0000 (0:00:00.373) 0:10:57.147 **** 2026-02-18 03:49:21.368686 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:49:21.368694 | orchestrator | 2026-02-18 03:49:21.368701 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-18 03:49:21.368709 | orchestrator | Wednesday 18 February 2026 03:49:18 +0000 (0:00:00.889) 0:10:58.036 **** 2026-02-18 03:49:21.368717 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:49:21.368725 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:49:21.368733 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:49:21.368740 | orchestrator | 2026-02-18 03:49:21.368748 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-18 03:49:21.368756 | orchestrator | Wednesday 18 February 2026 03:49:19 +0000 (0:00:00.362) 0:10:58.399 **** 2026-02-18 03:49:21.368763 | orchestrator | skipping: [testbed-node-3] 2026-02-18 
03:49:21.368771 | orchestrator | skipping: [testbed-node-4]
2026-02-18 03:49:21.368778 | orchestrator | skipping: [testbed-node-5]
2026-02-18 03:49:21.368786 | orchestrator |
2026-02-18 03:49:21.368794 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-18 03:49:21.368801 | orchestrator | Wednesday 18 February 2026 03:49:19 +0000 (0:00:00.394) 0:10:58.793 ****
2026-02-18 03:49:21.368809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 03:49:21.368817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 03:49:21.368852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 03:49:21.368867 | orchestrator | skipping: [testbed-node-3]
2026-02-18 03:49:21.368875 | orchestrator |
2026-02-18 03:49:21.368883 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-18 03:49:21.368891 | orchestrator | Wednesday 18 February 2026 03:49:20 +0000 (0:00:01.060) 0:10:59.853 ****
2026-02-18 03:49:21.368898 | orchestrator | ok: [testbed-node-3]
2026-02-18 03:49:21.368906 | orchestrator | ok: [testbed-node-4]
2026-02-18 03:49:21.368920 | orchestrator | ok: [testbed-node-5]
2026-02-18 03:49:21.368928 | orchestrator |
2026-02-18 03:49:21.368936 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:49:21.368943 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-02-18 03:49:21.368959 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-02-18 03:49:21.368967 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-02-18 03:49:21.368975 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-02-18 03:49:21.368982 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-02-18 03:49:21.368990 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-02-18 03:49:21.368998 | orchestrator |
2026-02-18 03:49:21.369005 | orchestrator |
2026-02-18 03:49:21.369013 | orchestrator |
2026-02-18 03:49:21.369021 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:49:21.369028 | orchestrator | Wednesday 18 February 2026 03:49:21 +0000 (0:00:00.532) 0:11:00.386 ****
2026-02-18 03:49:21.369036 | orchestrator | ===============================================================================
2026-02-18 03:49:21.369048 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 55.93s
2026-02-18 03:49:21.369061 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.55s
2026-02-18 03:49:21.369075 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.56s
2026-02-18 03:49:21.369094 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.43s
2026-02-18 03:49:21.369108 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.93s
2026-02-18 03:49:21.369129 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.63s
2026-02-18 03:49:21.853234 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.65s
2026-02-18 03:49:21.853338 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.59s
2026-02-18 03:49:21.853352 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.05s
2026-02-18 03:49:21.853363 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.41s
2026-02-18 03:49:21.853374 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.17s
2026-02-18 03:49:21.853385 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.66s
2026-02-18 03:49:21.853404 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.17s
2026-02-18 03:49:21.853424 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.45s
2026-02-18 03:49:21.853452 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.14s
2026-02-18 03:49:21.853475 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.90s
2026-02-18 03:49:21.853495 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.77s
2026-02-18 03:49:21.853514 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.61s
2026-02-18 03:49:21.853534 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.30s
2026-02-18 03:49:21.853551 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.24s
2026-02-18 03:49:24.386880 | orchestrator | 2026-02-18 03:49:24 | INFO  | Task eba39032-56d8-4b88-ae5c-066dcaa29f23 
(ceph-pools) was prepared for execution. 2026-02-18 03:49:24.387003 | orchestrator | 2026-02-18 03:49:24 | INFO  | It takes a moment until task eba39032-56d8-4b88-ae5c-066dcaa29f23 (ceph-pools) has been started and output is visible here. 2026-02-18 03:49:39.271125 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-18 03:49:39.271264 | orchestrator | 2.16.14 2026-02-18 03:49:39.271291 | orchestrator | 2026-02-18 03:49:39.271314 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-18 03:49:39.271335 | orchestrator | 2026-02-18 03:49:39.271355 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 03:49:39.271375 | orchestrator | Wednesday 18 February 2026 03:49:29 +0000 (0:00:00.615) 0:00:00.615 **** 2026-02-18 03:49:39.271396 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:49:39.271418 | orchestrator | 2026-02-18 03:49:39.271437 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 03:49:39.271452 | orchestrator | Wednesday 18 February 2026 03:49:29 +0000 (0:00:00.690) 0:00:01.305 **** 2026-02-18 03:49:39.271463 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:49:39.271474 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:49:39.271485 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:49:39.271495 | orchestrator | 2026-02-18 03:49:39.271506 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-18 03:49:39.271517 | orchestrator | Wednesday 18 February 2026 03:49:30 +0000 (0:00:00.697) 0:00:02.002 **** 2026-02-18 03:49:39.271527 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:49:39.271538 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:49:39.271549 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:49:39.271560 
| orchestrator | 2026-02-18 03:49:39.271571 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 03:49:39.271581 | orchestrator | Wednesday 18 February 2026 03:49:30 +0000 (0:00:00.323) 0:00:02.326 **** 2026-02-18 03:49:39.271592 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:49:39.271602 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:49:39.271613 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:49:39.271623 | orchestrator | 2026-02-18 03:49:39.271651 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 03:49:39.271664 | orchestrator | Wednesday 18 February 2026 03:49:31 +0000 (0:00:00.931) 0:00:03.258 **** 2026-02-18 03:49:39.271676 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:49:39.271688 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:49:39.271700 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:49:39.271713 | orchestrator | 2026-02-18 03:49:39.271725 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-18 03:49:39.271737 | orchestrator | Wednesday 18 February 2026 03:49:32 +0000 (0:00:00.321) 0:00:03.579 **** 2026-02-18 03:49:39.271749 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:49:39.271762 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:49:39.271774 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:49:39.271786 | orchestrator | 2026-02-18 03:49:39.271828 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-18 03:49:39.271850 | orchestrator | Wednesday 18 February 2026 03:49:32 +0000 (0:00:00.320) 0:00:03.899 **** 2026-02-18 03:49:39.271870 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:49:39.271890 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:49:39.271908 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:49:39.271924 | orchestrator | 2026-02-18 03:49:39.271937 | orchestrator | TASK 
[ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-18 03:49:39.271950 | orchestrator | Wednesday 18 February 2026 03:49:32 +0000 (0:00:00.334) 0:00:04.234 **** 2026-02-18 03:49:39.271962 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:39.271976 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:39.271988 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:39.272000 | orchestrator | 2026-02-18 03:49:39.272010 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-18 03:49:39.272043 | orchestrator | Wednesday 18 February 2026 03:49:33 +0000 (0:00:00.567) 0:00:04.801 **** 2026-02-18 03:49:39.272055 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:49:39.272066 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:49:39.272076 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:49:39.272087 | orchestrator | 2026-02-18 03:49:39.272098 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-18 03:49:39.272108 | orchestrator | Wednesday 18 February 2026 03:49:33 +0000 (0:00:00.328) 0:00:05.130 **** 2026-02-18 03:49:39.272119 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 03:49:39.272130 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 03:49:39.272140 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 03:49:39.272151 | orchestrator | 2026-02-18 03:49:39.272161 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-18 03:49:39.272172 | orchestrator | Wednesday 18 February 2026 03:49:34 +0000 (0:00:00.687) 0:00:05.817 **** 2026-02-18 03:49:39.272183 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:49:39.272193 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:49:39.272204 | 
orchestrator | ok: [testbed-node-5] 2026-02-18 03:49:39.272215 | orchestrator | 2026-02-18 03:49:39.272225 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-18 03:49:39.272236 | orchestrator | Wednesday 18 February 2026 03:49:34 +0000 (0:00:00.453) 0:00:06.271 **** 2026-02-18 03:49:39.272246 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 03:49:39.272257 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 03:49:39.272273 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 03:49:39.272291 | orchestrator | 2026-02-18 03:49:39.272309 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-18 03:49:39.272326 | orchestrator | Wednesday 18 February 2026 03:49:37 +0000 (0:00:02.260) 0:00:08.532 **** 2026-02-18 03:49:39.272344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-18 03:49:39.272363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-18 03:49:39.272383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-18 03:49:39.272402 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:39.272421 | orchestrator | 2026-02-18 03:49:39.272464 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-18 03:49:39.272484 | orchestrator | Wednesday 18 February 2026 03:49:37 +0000 (0:00:00.687) 0:00:09.219 **** 2026-02-18 03:49:39.272504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 03:49:39.272519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-18 03:49:39.272530 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 03:49:39.272541 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:39.272552 | orchestrator | 2026-02-18 03:49:39.272563 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-18 03:49:39.272573 | orchestrator | Wednesday 18 February 2026 03:49:38 +0000 (0:00:01.113) 0:00:10.333 **** 2026-02-18 03:49:39.272594 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:39.272622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:39.272634 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:39.272645 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:39.272656 | orchestrator | 2026-02-18 03:49:39.272666 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-18 03:49:39.272677 | orchestrator | Wednesday 18 February 2026 03:49:39 +0000 (0:00:00.169) 0:00:10.503 **** 2026-02-18 03:49:39.272690 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '90866ac7d579', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 03:49:35.735971', 'end': '2026-02-18 03:49:35.776943', 'delta': '0:00:00.040972', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['90866ac7d579'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-18 03:49:39.272705 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4c84206aa4db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 03:49:36.299207', 'end': '2026-02-18 03:49:36.334883', 'delta': '0:00:00.035676', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['4c84206aa4db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-18 03:49:39.272725 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '11fb53bc1513', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 03:49:36.854123', 'end': '2026-02-18 03:49:36.894998', 'delta': '0:00:00.040875', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['11fb53bc1513'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-18 03:49:46.474070 | orchestrator | 2026-02-18 03:49:46.474187 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-18 03:49:46.474210 | orchestrator | Wednesday 18 February 2026 03:49:39 +0000 (0:00:00.242) 0:00:10.746 **** 2026-02-18 03:49:46.474260 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:49:46.474278 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:49:46.474293 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:49:46.474307 | orchestrator | 2026-02-18 03:49:46.474320 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-18 03:49:46.474334 | orchestrator | Wednesday 18 February 2026 03:49:39 +0000 (0:00:00.482) 0:00:11.228 **** 2026-02-18 03:49:46.474348 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-18 03:49:46.474364 | orchestrator | 2026-02-18 03:49:46.474393 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-18 03:49:46.474408 | 
orchestrator | Wednesday 18 February 2026 03:49:41 +0000 (0:00:01.716) 0:00:12.945 **** 2026-02-18 03:49:46.474423 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:46.474437 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:46.474452 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:46.474466 | orchestrator | 2026-02-18 03:49:46.474480 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-18 03:49:46.474494 | orchestrator | Wednesday 18 February 2026 03:49:41 +0000 (0:00:00.316) 0:00:13.261 **** 2026-02-18 03:49:46.474508 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:46.474523 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:46.474537 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:46.474552 | orchestrator | 2026-02-18 03:49:46.474566 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 03:49:46.474582 | orchestrator | Wednesday 18 February 2026 03:49:42 +0000 (0:00:00.915) 0:00:14.177 **** 2026-02-18 03:49:46.474597 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:46.474612 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:46.474626 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:46.474636 | orchestrator | 2026-02-18 03:49:46.474646 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 03:49:46.474655 | orchestrator | Wednesday 18 February 2026 03:49:43 +0000 (0:00:00.315) 0:00:14.493 **** 2026-02-18 03:49:46.474664 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:49:46.474673 | orchestrator | 2026-02-18 03:49:46.474682 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 03:49:46.474690 | orchestrator | Wednesday 18 February 2026 03:49:43 +0000 (0:00:00.143) 0:00:14.636 **** 2026-02-18 03:49:46.474698 | orchestrator | skipping: 
[testbed-node-3] 2026-02-18 03:49:46.474705 | orchestrator | 2026-02-18 03:49:46.474713 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 03:49:46.474721 | orchestrator | Wednesday 18 February 2026 03:49:43 +0000 (0:00:00.275) 0:00:14.912 **** 2026-02-18 03:49:46.474729 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:46.474737 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:46.474745 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:46.474752 | orchestrator | 2026-02-18 03:49:46.474760 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 03:49:46.474768 | orchestrator | Wednesday 18 February 2026 03:49:43 +0000 (0:00:00.312) 0:00:15.224 **** 2026-02-18 03:49:46.474776 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:46.474783 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:46.474819 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:46.474828 | orchestrator | 2026-02-18 03:49:46.474836 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 03:49:46.474844 | orchestrator | Wednesday 18 February 2026 03:49:44 +0000 (0:00:00.327) 0:00:15.552 **** 2026-02-18 03:49:46.474852 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:46.474860 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:46.474870 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:46.474882 | orchestrator | 2026-02-18 03:49:46.474893 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-18 03:49:46.474900 | orchestrator | Wednesday 18 February 2026 03:49:44 +0000 (0:00:00.581) 0:00:16.134 **** 2026-02-18 03:49:46.474918 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:46.474926 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:46.474934 | orchestrator | skipping: 
[testbed-node-5] 2026-02-18 03:49:46.474942 | orchestrator | 2026-02-18 03:49:46.474950 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 03:49:46.474958 | orchestrator | Wednesday 18 February 2026 03:49:45 +0000 (0:00:00.383) 0:00:16.517 **** 2026-02-18 03:49:46.474966 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:46.474973 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:46.474981 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:46.474989 | orchestrator | 2026-02-18 03:49:46.474997 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-18 03:49:46.475005 | orchestrator | Wednesday 18 February 2026 03:49:45 +0000 (0:00:00.328) 0:00:16.845 **** 2026-02-18 03:49:46.475012 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:46.475020 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:46.475028 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:46.475036 | orchestrator | 2026-02-18 03:49:46.475043 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 03:49:46.475052 | orchestrator | Wednesday 18 February 2026 03:49:45 +0000 (0:00:00.546) 0:00:17.392 **** 2026-02-18 03:49:46.475060 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:46.475067 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:46.475075 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:46.475083 | orchestrator | 2026-02-18 03:49:46.475091 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-18 03:49:46.475098 | orchestrator | Wednesday 18 February 2026 03:49:46 +0000 (0:00:00.326) 0:00:17.718 **** 2026-02-18 03:49:46.475127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.475146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.475156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.475166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.475180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.475188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.475196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.475204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.475212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.475227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.548333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.548487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.548511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.548543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.548564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.548576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.548597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.548611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.548623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-18 03:49:46.548634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.548646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.548663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.744333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.744436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.744459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.744503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.744547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.744576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.744594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.744620 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:46.744639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.744650 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:46.744659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.744674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.744694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.744727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.969141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.969229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.969276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.969286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.969294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.969302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-18 03:49:46.969336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.969356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.969366 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.969376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.969385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-18 03:49:46.969395 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:49:46.969404 | orchestrator | 2026-02-18 03:49:46.969413 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-02-18 03:49:46.969423 | orchestrator | Wednesday 18 February 2026 03:49:46 +0000 (0:00:00.635) 0:00:18.354 **** 2026-02-18 03:49:46.969438 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.100692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.100862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.100884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.100896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.100908 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.100919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.100981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.100995 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.101006 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.101018 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.101029 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.101061 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.170988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.171081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.171095 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.171145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.171157 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.171184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.171195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.171205 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.171217 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.171238 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.171248 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.171263 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.374883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.375022 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.375040 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:49:47.375070 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.375083 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.375095 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.375121 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:49:47.375132 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.375148 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.375159 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.375179 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.545049 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.545135 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.545157 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.545170 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.545175 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.545179 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.545213 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 
'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.545229 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.545234 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:49:47.545246 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:50:00.036692 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-18-02-27-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-18 03:50:00.036862 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:50:00.036880 | orchestrator | 2026-02-18 03:50:00.036889 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 03:50:00.036897 | orchestrator | Wednesday 18 February 2026 03:49:47 +0000 (0:00:00.672) 0:00:19.026 **** 2026-02-18 03:50:00.036904 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:50:00.036912 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:50:00.036918 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:50:00.036924 | orchestrator | 2026-02-18 03:50:00.036931 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 03:50:00.036937 | orchestrator | Wednesday 18 February 2026 03:49:48 +0000 (0:00:00.947) 0:00:19.974 **** 2026-02-18 03:50:00.036944 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:50:00.036951 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:50:00.036957 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:50:00.036964 | orchestrator | 2026-02-18 03:50:00.036970 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 03:50:00.036976 | orchestrator | Wednesday 18 February 2026 03:49:48 +0000 (0:00:00.340) 0:00:20.315 **** 2026-02-18 03:50:00.036982 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:50:00.036989 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:50:00.036995 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:50:00.037001 | orchestrator | 2026-02-18 03:50:00.037021 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 03:50:00.037027 | orchestrator | Wednesday 18 February 2026 03:49:49 +0000 (0:00:00.672) 
0:00:20.987 **** 2026-02-18 03:50:00.037033 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:50:00.037038 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:50:00.037043 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:50:00.037049 | orchestrator | 2026-02-18 03:50:00.037054 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 03:50:00.037059 | orchestrator | Wednesday 18 February 2026 03:49:49 +0000 (0:00:00.341) 0:00:21.328 **** 2026-02-18 03:50:00.037065 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:50:00.037071 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:50:00.037078 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:50:00.037084 | orchestrator | 2026-02-18 03:50:00.037090 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 03:50:00.037096 | orchestrator | Wednesday 18 February 2026 03:49:50 +0000 (0:00:00.822) 0:00:22.151 **** 2026-02-18 03:50:00.037102 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:50:00.037108 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:50:00.037114 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:50:00.037120 | orchestrator | 2026-02-18 03:50:00.037126 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 03:50:00.037132 | orchestrator | Wednesday 18 February 2026 03:49:51 +0000 (0:00:00.345) 0:00:22.497 **** 2026-02-18 03:50:00.037138 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-18 03:50:00.037144 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-18 03:50:00.037151 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-18 03:50:00.037157 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-18 03:50:00.037162 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-18 03:50:00.037169 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-18 03:50:00.037175 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-18 03:50:00.037189 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-18 03:50:00.037195 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-18 03:50:00.037200 | orchestrator | 2026-02-18 03:50:00.037206 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 03:50:00.037221 | orchestrator | Wednesday 18 February 2026 03:49:52 +0000 (0:00:01.136) 0:00:23.633 **** 2026-02-18 03:50:00.037229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-18 03:50:00.037235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-18 03:50:00.037241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-18 03:50:00.037248 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:50:00.037254 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-18 03:50:00.037260 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-18 03:50:00.037266 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-18 03:50:00.037272 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:50:00.037278 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-18 03:50:00.037284 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-18 03:50:00.037289 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-18 03:50:00.037296 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:50:00.037302 | orchestrator | 2026-02-18 03:50:00.037309 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 03:50:00.037317 | orchestrator | Wednesday 18 February 2026 03:49:52 +0000 (0:00:00.383) 0:00:24.017 **** 2026-02-18 
03:50:00.037342 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 03:50:00.037350 | orchestrator | 2026-02-18 03:50:00.037357 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 03:50:00.037365 | orchestrator | Wednesday 18 February 2026 03:49:53 +0000 (0:00:00.771) 0:00:24.788 **** 2026-02-18 03:50:00.037372 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:50:00.037378 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:50:00.037384 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:50:00.037389 | orchestrator | 2026-02-18 03:50:00.037395 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 03:50:00.037402 | orchestrator | Wednesday 18 February 2026 03:49:53 +0000 (0:00:00.400) 0:00:25.188 **** 2026-02-18 03:50:00.037407 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:50:00.037412 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:50:00.037418 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:50:00.037424 | orchestrator | 2026-02-18 03:50:00.037430 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 03:50:00.037436 | orchestrator | Wednesday 18 February 2026 03:49:54 +0000 (0:00:00.314) 0:00:25.503 **** 2026-02-18 03:50:00.037442 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:50:00.037448 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:50:00.037454 | orchestrator | skipping: [testbed-node-5] 2026-02-18 03:50:00.037460 | orchestrator | 2026-02-18 03:50:00.037466 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 03:50:00.037472 | orchestrator | Wednesday 18 February 2026 03:49:54 +0000 (0:00:00.565) 0:00:26.069 **** 2026-02-18 
03:50:00.037478 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:50:00.037484 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:50:00.037490 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:50:00.037497 | orchestrator | 2026-02-18 03:50:00.037503 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 03:50:00.037509 | orchestrator | Wednesday 18 February 2026 03:49:55 +0000 (0:00:00.452) 0:00:26.521 **** 2026-02-18 03:50:00.037515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 03:50:00.037531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 03:50:00.037545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 03:50:00.037550 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:50:00.037556 | orchestrator | 2026-02-18 03:50:00.037562 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 03:50:00.037568 | orchestrator | Wednesday 18 February 2026 03:49:55 +0000 (0:00:00.400) 0:00:26.922 **** 2026-02-18 03:50:00.037573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 03:50:00.037580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 03:50:00.037586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 03:50:00.037592 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:50:00.037598 | orchestrator | 2026-02-18 03:50:00.037603 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 03:50:00.037608 | orchestrator | Wednesday 18 February 2026 03:49:55 +0000 (0:00:00.387) 0:00:27.309 **** 2026-02-18 03:50:00.037613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 03:50:00.037619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 03:50:00.037626 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 03:50:00.037632 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:50:00.037638 | orchestrator | 2026-02-18 03:50:00.037644 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 03:50:00.037650 | orchestrator | Wednesday 18 February 2026 03:49:56 +0000 (0:00:00.380) 0:00:27.690 **** 2026-02-18 03:50:00.037655 | orchestrator | ok: [testbed-node-3] 2026-02-18 03:50:00.037661 | orchestrator | ok: [testbed-node-4] 2026-02-18 03:50:00.037666 | orchestrator | ok: [testbed-node-5] 2026-02-18 03:50:00.037672 | orchestrator | 2026-02-18 03:50:00.037677 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 03:50:00.037684 | orchestrator | Wednesday 18 February 2026 03:49:56 +0000 (0:00:00.360) 0:00:28.050 **** 2026-02-18 03:50:00.037692 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-18 03:50:00.037697 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-18 03:50:00.037702 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-18 03:50:00.037708 | orchestrator | 2026-02-18 03:50:00.037714 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 03:50:00.037720 | orchestrator | Wednesday 18 February 2026 03:49:57 +0000 (0:00:00.830) 0:00:28.881 **** 2026-02-18 03:50:00.037726 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 03:50:00.037733 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 03:50:00.037738 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 03:50:00.037744 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-18 03:50:00.037749 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-18 03:50:00.037754 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 03:50:00.037760 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 03:50:00.037765 | orchestrator | 2026-02-18 03:50:00.037820 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 03:50:00.037828 | orchestrator | Wednesday 18 February 2026 03:49:58 +0000 (0:00:00.890) 0:00:29.771 **** 2026-02-18 03:50:00.037835 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 03:50:00.037854 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 03:51:39.374252 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 03:51:39.374370 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-18 03:51:39.374406 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 03:51:39.374416 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 03:51:39.374436 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 03:51:39.374487 | orchestrator | 2026-02-18 03:51:39.374498 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-18 03:51:39.374509 | orchestrator | Wednesday 18 February 2026 03:50:00 +0000 (0:00:01.736) 0:00:31.507 **** 2026-02-18 03:51:39.374518 | orchestrator | skipping: [testbed-node-3] 2026-02-18 03:51:39.374528 | orchestrator | skipping: [testbed-node-4] 2026-02-18 03:51:39.374536 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-18 03:51:39.374544 | orchestrator | 2026-02-18 03:51:39.374553 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-18 03:51:39.374561 | orchestrator | Wednesday 18 February 2026 03:50:00 +0000 (0:00:00.412) 0:00:31.920 **** 2026-02-18 03:51:39.374571 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-18 03:51:39.374582 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-18 03:51:39.374603 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-18 03:51:39.374611 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-18 03:51:39.374619 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-18 03:51:39.374627 | orchestrator | 2026-02-18 03:51:39.374636 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-18 03:51:39.374644 | orchestrator | Wednesday 18 February 2026 03:50:45 +0000 (0:00:45.420) 0:01:17.340 **** 2026-02-18 03:51:39.374705 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374714 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374719 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374723 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374728 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374733 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374738 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-18 03:51:39.374742 | orchestrator | 2026-02-18 03:51:39.374747 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-18 03:51:39.374752 | orchestrator | Wednesday 18 February 2026 03:51:09 +0000 (0:00:23.905) 0:01:41.246 **** 2026-02-18 03:51:39.374757 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374768 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374773 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374778 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374783 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374787 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374792 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 03:51:39.374797 | orchestrator | 2026-02-18 03:51:39.374802 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-18 03:51:39.374807 | orchestrator | Wednesday 18 February 2026 03:51:21 +0000 (0:00:11.737) 0:01:52.984 **** 2026-02-18 03:51:39.374813 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374830 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-18 03:51:39.374837 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-18 03:51:39.374842 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374848 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-18 03:51:39.374854 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-18 03:51:39.374859 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374865 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-18 03:51:39.374870 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-18 03:51:39.374876 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374881 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-18 03:51:39.374886 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-18 03:51:39.374892 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374897 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-18 03:51:39.374903 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-18 03:51:39.374911 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 03:51:39.374919 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-18 03:51:39.374927 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-18 03:51:39.374935 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-18 03:51:39.374943 | orchestrator | 2026-02-18 03:51:39.374951 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:51:39.374975 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-18 03:51:39.374986 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-18 03:51:39.374994 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-18 03:51:39.374999 | orchestrator | 2026-02-18 03:51:39.375004 | orchestrator | 2026-02-18 03:51:39.375009 | orchestrator | 2026-02-18 03:51:39.375013 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:51:39.375018 | orchestrator | Wednesday 18 February 2026 03:51:38 +0000 (0:00:17.487) 0:02:10.471 **** 2026-02-18 03:51:39.375023 | orchestrator | =============================================================================== 2026-02-18 03:51:39.375033 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.42s 2026-02-18 03:51:39.375037 | orchestrator | generate keys ---------------------------------------------------------- 23.91s 2026-02-18 03:51:39.375042 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.49s 
2026-02-18 03:51:39.375047 | orchestrator | get keys from monitors ------------------------------------------------- 11.74s 2026-02-18 03:51:39.375051 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.26s 2026-02-18 03:51:39.375056 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.74s 2026-02-18 03:51:39.375061 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.72s 2026-02-18 03:51:39.375068 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.14s 2026-02-18 03:51:39.375076 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.11s 2026-02-18 03:51:39.375083 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.95s 2026-02-18 03:51:39.375091 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.93s 2026-02-18 03:51:39.375100 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.92s 2026-02-18 03:51:39.375107 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.89s 2026-02-18 03:51:39.375114 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.83s 2026-02-18 03:51:39.375122 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.82s 2026-02-18 03:51:39.375129 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.77s 2026-02-18 03:51:39.375137 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.70s 2026-02-18 03:51:39.375144 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.69s 2026-02-18 03:51:39.375151 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.69s 2026-02-18 
03:51:39.375159 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.69s 2026-02-18 03:51:41.861419 | orchestrator | 2026-02-18 03:51:41 | INFO  | Task d4b1119a-bf31-46d4-b00a-1d5183f92cfa (copy-ceph-keys) was prepared for execution. 2026-02-18 03:51:41.861631 | orchestrator | 2026-02-18 03:51:41 | INFO  | It takes a moment until task d4b1119a-bf31-46d4-b00a-1d5183f92cfa (copy-ceph-keys) has been started and output is visible here. 2026-02-18 03:52:21.917692 | orchestrator | 2026-02-18 03:52:21.917832 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-18 03:52:21.917852 | orchestrator | 2026-02-18 03:52:21.917862 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-18 03:52:21.917872 | orchestrator | Wednesday 18 February 2026 03:51:46 +0000 (0:00:00.170) 0:00:00.170 **** 2026-02-18 03:52:21.917883 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-18 03:52:21.917895 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.917905 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.917916 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-18 03:52:21.917926 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.917937 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-18 03:52:21.917948 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-18 03:52:21.917958 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-02-18 03:52:21.917998 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-18 03:52:21.918008 | orchestrator | 2026-02-18 03:52:21.918080 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-18 03:52:21.918091 | orchestrator | Wednesday 18 February 2026 03:51:51 +0000 (0:00:04.779) 0:00:04.950 **** 2026-02-18 03:52:21.918100 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-18 03:52:21.918126 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.918137 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.918143 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-18 03:52:21.918148 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.918154 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-18 03:52:21.918159 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-18 03:52:21.918164 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-18 03:52:21.918170 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-18 03:52:21.918175 | orchestrator | 2026-02-18 03:52:21.918180 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-18 03:52:21.918186 | orchestrator | Wednesday 18 February 2026 03:51:55 +0000 (0:00:04.469) 0:00:09.420 **** 2026-02-18 03:52:21.918196 
| orchestrator | changed: [testbed-manager -> localhost] 2026-02-18 03:52:21.918206 | orchestrator | 2026-02-18 03:52:21.918215 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-18 03:52:21.918224 | orchestrator | Wednesday 18 February 2026 03:51:56 +0000 (0:00:00.995) 0:00:10.415 **** 2026-02-18 03:52:21.918233 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-18 03:52:21.918243 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.918253 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.918263 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-18 03:52:21.918272 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.918282 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-18 03:52:21.918292 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-18 03:52:21.918302 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-18 03:52:21.918311 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-18 03:52:21.918320 | orchestrator | 2026-02-18 03:52:21.918330 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-18 03:52:21.918339 | orchestrator | Wednesday 18 February 2026 03:52:10 +0000 (0:00:14.070) 0:00:24.485 **** 2026-02-18 03:52:21.918348 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-18 03:52:21.918358 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-02-18 03:52:21.918367 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-18 03:52:21.918374 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-18 03:52:21.918400 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-18 03:52:21.918415 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-18 03:52:21.918421 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-18 03:52:21.918436 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-18 03:52:21.918443 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-18 03:52:21.918449 | orchestrator | 2026-02-18 03:52:21.918455 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-18 03:52:21.918461 | orchestrator | Wednesday 18 February 2026 03:52:14 +0000 (0:00:03.393) 0:00:27.879 **** 2026-02-18 03:52:21.918468 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-18 03:52:21.918475 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.918481 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.918488 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-18 03:52:21.918494 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-18 03:52:21.918500 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-18 03:52:21.918506 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-02-18 03:52:21.918511 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-18 03:52:21.918516 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-18 03:52:21.918522 | orchestrator | 2026-02-18 03:52:21.918528 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:52:21.918538 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:52:21.918546 | orchestrator | 2026-02-18 03:52:21.918551 | orchestrator | 2026-02-18 03:52:21.918557 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:52:21.918562 | orchestrator | Wednesday 18 February 2026 03:52:21 +0000 (0:00:07.523) 0:00:35.403 **** 2026-02-18 03:52:21.918567 | orchestrator | =============================================================================== 2026-02-18 03:52:21.918572 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.07s 2026-02-18 03:52:21.918578 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.52s 2026-02-18 03:52:21.918583 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.78s 2026-02-18 03:52:21.918589 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.47s 2026-02-18 03:52:21.918594 | orchestrator | Check if target directories exist --------------------------------------- 3.39s 2026-02-18 03:52:21.918599 | orchestrator | Create share directory -------------------------------------------------- 1.00s 2026-02-18 03:52:34.339734 | orchestrator | 2026-02-18 03:52:34 | INFO  | Task 41c64737-3717-4c07-87db-af541878a8fc (cephclient) was prepared for execution. 
2026-02-18 03:52:34.339849 | orchestrator | 2026-02-18 03:52:34 | INFO  | It takes a moment until task 41c64737-3717-4c07-87db-af541878a8fc (cephclient) has been started and output is visible here. 2026-02-18 03:53:37.177073 | orchestrator | 2026-02-18 03:53:37.177188 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-18 03:53:37.177202 | orchestrator | 2026-02-18 03:53:37.177210 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-18 03:53:37.177218 | orchestrator | Wednesday 18 February 2026 03:52:38 +0000 (0:00:00.258) 0:00:00.258 **** 2026-02-18 03:53:37.177226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-18 03:53:37.177256 | orchestrator | 2026-02-18 03:53:37.177264 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-18 03:53:37.177272 | orchestrator | Wednesday 18 February 2026 03:52:39 +0000 (0:00:00.255) 0:00:00.513 **** 2026-02-18 03:53:37.177280 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-18 03:53:37.177288 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-18 03:53:37.177296 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-18 03:53:37.177303 | orchestrator | 2026-02-18 03:53:37.177310 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-18 03:53:37.177318 | orchestrator | Wednesday 18 February 2026 03:52:40 +0000 (0:00:01.335) 0:00:01.849 **** 2026-02-18 03:53:37.177326 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-18 03:53:37.177334 | orchestrator | 2026-02-18 03:53:37.177342 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-02-18 03:53:37.177349 | orchestrator | Wednesday 18 February 2026 03:52:42 +0000 (0:00:01.547) 0:00:03.396 **** 2026-02-18 03:53:37.177357 | orchestrator | changed: [testbed-manager] 2026-02-18 03:53:37.177364 | orchestrator | 2026-02-18 03:53:37.177371 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-18 03:53:37.177378 | orchestrator | Wednesday 18 February 2026 03:52:42 +0000 (0:00:00.948) 0:00:04.345 **** 2026-02-18 03:53:37.177385 | orchestrator | changed: [testbed-manager] 2026-02-18 03:53:37.177392 | orchestrator | 2026-02-18 03:53:37.177399 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-18 03:53:37.177407 | orchestrator | Wednesday 18 February 2026 03:52:43 +0000 (0:00:01.007) 0:00:05.353 **** 2026-02-18 03:53:37.177414 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-02-18 03:53:37.177422 | orchestrator | ok: [testbed-manager] 2026-02-18 03:53:37.177429 | orchestrator | 2026-02-18 03:53:37.177437 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-18 03:53:37.177445 | orchestrator | Wednesday 18 February 2026 03:53:26 +0000 (0:00:42.650) 0:00:48.004 **** 2026-02-18 03:53:37.177453 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-02-18 03:53:37.177461 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-02-18 03:53:37.177469 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-02-18 03:53:37.177477 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-02-18 03:53:37.177484 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-02-18 03:53:37.177492 | orchestrator | 2026-02-18 03:53:37.177500 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-18 03:53:37.177508 | 
orchestrator | Wednesday 18 February 2026 03:53:30 +0000 (0:00:04.340) 0:00:52.344 **** 2026-02-18 03:53:37.177516 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-18 03:53:37.177545 | orchestrator | 2026-02-18 03:53:37.177553 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-18 03:53:37.177560 | orchestrator | Wednesday 18 February 2026 03:53:31 +0000 (0:00:00.496) 0:00:52.841 **** 2026-02-18 03:53:37.177568 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:53:37.177575 | orchestrator | 2026-02-18 03:53:37.177581 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-18 03:53:37.177588 | orchestrator | Wednesday 18 February 2026 03:53:31 +0000 (0:00:00.150) 0:00:52.991 **** 2026-02-18 03:53:37.177595 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:53:37.177602 | orchestrator | 2026-02-18 03:53:37.177610 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-02-18 03:53:37.177617 | orchestrator | Wednesday 18 February 2026 03:53:32 +0000 (0:00:00.551) 0:00:53.542 **** 2026-02-18 03:53:37.177641 | orchestrator | changed: [testbed-manager] 2026-02-18 03:53:37.177650 | orchestrator | 2026-02-18 03:53:37.177658 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-02-18 03:53:37.177678 | orchestrator | Wednesday 18 February 2026 03:53:33 +0000 (0:00:01.492) 0:00:55.035 **** 2026-02-18 03:53:37.177686 | orchestrator | changed: [testbed-manager] 2026-02-18 03:53:37.177693 | orchestrator | 2026-02-18 03:53:37.177700 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-02-18 03:53:37.177707 | orchestrator | Wednesday 18 February 2026 03:53:34 +0000 (0:00:00.793) 0:00:55.828 **** 2026-02-18 03:53:37.177715 | orchestrator | changed: [testbed-manager] 2026-02-18 03:53:37.177722 | 
orchestrator | 2026-02-18 03:53:37.177729 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-18 03:53:37.177736 | orchestrator | Wednesday 18 February 2026 03:53:35 +0000 (0:00:00.719) 0:00:56.548 **** 2026-02-18 03:53:37.177743 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-18 03:53:37.177751 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-18 03:53:37.177759 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-18 03:53:37.177767 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-18 03:53:37.177775 | orchestrator | 2026-02-18 03:53:37.177783 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:53:37.177791 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 03:53:37.177799 | orchestrator | 2026-02-18 03:53:37.177807 | orchestrator | 2026-02-18 03:53:37.177831 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:53:37.177838 | orchestrator | Wednesday 18 February 2026 03:53:36 +0000 (0:00:01.613) 0:00:58.161 **** 2026-02-18 03:53:37.177846 | orchestrator | =============================================================================== 2026-02-18 03:53:37.177854 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.65s 2026-02-18 03:53:37.177862 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.34s 2026-02-18 03:53:37.177868 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.61s 2026-02-18 03:53:37.177873 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.55s 2026-02-18 03:53:37.177878 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.49s 2026-02-18 03:53:37.177883 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.34s 2026-02-18 03:53:37.177888 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.01s 2026-02-18 03:53:37.177893 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s 2026-02-18 03:53:37.177897 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.79s 2026-02-18 03:53:37.177901 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.72s 2026-02-18 03:53:37.177906 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.55s 2026-02-18 03:53:37.177910 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s 2026-02-18 03:53:37.177914 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2026-02-18 03:53:37.177918 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-02-18 03:53:39.700621 | orchestrator | 2026-02-18 03:53:39 | INFO  | Task baeb9377-bb58-49b3-9546-70660ab1e8e6 (ceph-bootstrap-dashboard) was prepared for execution. 2026-02-18 03:53:39.700716 | orchestrator | 2026-02-18 03:53:39 | INFO  | It takes a moment until task baeb9377-bb58-49b3-9546-70660ab1e8e6 (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-02-18 03:55:01.366281 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-18 03:55:01.366398 | orchestrator | 2.16.14 2026-02-18 03:55:01.366410 | orchestrator | 2026-02-18 03:55:01.366418 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-18 03:55:01.366427 | orchestrator | 2026-02-18 03:55:01.366434 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-18 03:55:01.366502 | orchestrator | Wednesday 18 February 2026 03:53:44 +0000 (0:00:00.296) 0:00:00.296 **** 2026-02-18 03:55:01.366511 | orchestrator | changed: [testbed-manager] 2026-02-18 03:55:01.366520 | orchestrator | 2026-02-18 03:55:01.366527 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-18 03:55:01.366534 | orchestrator | Wednesday 18 February 2026 03:53:46 +0000 (0:00:01.928) 0:00:02.225 **** 2026-02-18 03:55:01.366542 | orchestrator | changed: [testbed-manager] 2026-02-18 03:55:01.366549 | orchestrator | 2026-02-18 03:55:01.366556 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-18 03:55:01.366563 | orchestrator | Wednesday 18 February 2026 03:53:47 +0000 (0:00:01.076) 0:00:03.302 **** 2026-02-18 03:55:01.366571 | orchestrator | changed: [testbed-manager] 2026-02-18 03:55:01.366578 | orchestrator | 2026-02-18 03:55:01.366585 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-18 03:55:01.366592 | orchestrator | Wednesday 18 February 2026 03:53:48 +0000 (0:00:01.141) 0:00:04.443 **** 2026-02-18 03:55:01.366599 | orchestrator | changed: [testbed-manager] 2026-02-18 03:55:01.366607 | orchestrator | 2026-02-18 03:55:01.366614 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-18 03:55:01.366625 | orchestrator | Wednesday 18 
February 2026 03:53:49 +0000 (0:00:01.208) 0:00:05.652 **** 2026-02-18 03:55:01.366638 | orchestrator | changed: [testbed-manager] 2026-02-18 03:55:01.366650 | orchestrator | 2026-02-18 03:55:01.366663 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-18 03:55:01.366690 | orchestrator | Wednesday 18 February 2026 03:53:50 +0000 (0:00:01.226) 0:00:06.879 **** 2026-02-18 03:55:01.366704 | orchestrator | changed: [testbed-manager] 2026-02-18 03:55:01.366717 | orchestrator | 2026-02-18 03:55:01.366728 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-18 03:55:01.366736 | orchestrator | Wednesday 18 February 2026 03:53:52 +0000 (0:00:01.144) 0:00:08.023 **** 2026-02-18 03:55:01.366743 | orchestrator | changed: [testbed-manager] 2026-02-18 03:55:01.366750 | orchestrator | 2026-02-18 03:55:01.366758 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-18 03:55:01.366765 | orchestrator | Wednesday 18 February 2026 03:53:54 +0000 (0:00:01.997) 0:00:10.020 **** 2026-02-18 03:55:01.366772 | orchestrator | changed: [testbed-manager] 2026-02-18 03:55:01.366779 | orchestrator | 2026-02-18 03:55:01.366786 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-18 03:55:01.366793 | orchestrator | Wednesday 18 February 2026 03:53:55 +0000 (0:00:01.206) 0:00:11.226 **** 2026-02-18 03:55:01.366800 | orchestrator | changed: [testbed-manager] 2026-02-18 03:55:01.366807 | orchestrator | 2026-02-18 03:55:01.366815 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-18 03:55:01.366822 | orchestrator | Wednesday 18 February 2026 03:54:36 +0000 (0:00:41.079) 0:00:52.305 **** 2026-02-18 03:55:01.366829 | orchestrator | skipping: [testbed-manager] 2026-02-18 03:55:01.366836 | orchestrator | 2026-02-18 03:55:01.366845 | 
orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-18 03:55:01.366853 | orchestrator | 2026-02-18 03:55:01.366861 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-18 03:55:01.366869 | orchestrator | Wednesday 18 February 2026 03:54:36 +0000 (0:00:00.147) 0:00:52.453 **** 2026-02-18 03:55:01.366878 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:55:01.366886 | orchestrator | 2026-02-18 03:55:01.366895 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-18 03:55:01.366903 | orchestrator | 2026-02-18 03:55:01.366911 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-18 03:55:01.366919 | orchestrator | Wednesday 18 February 2026 03:54:48 +0000 (0:00:11.870) 0:01:04.324 **** 2026-02-18 03:55:01.366928 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:55:01.366936 | orchestrator | 2026-02-18 03:55:01.366944 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-18 03:55:01.366959 | orchestrator | 2026-02-18 03:55:01.366968 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-18 03:55:01.366976 | orchestrator | Wednesday 18 February 2026 03:54:59 +0000 (0:00:11.262) 0:01:15.587 **** 2026-02-18 03:55:01.366986 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:55:01.366994 | orchestrator | 2026-02-18 03:55:01.367002 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:55:01.367011 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 03:55:01.367021 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:55:01.367029 | orchestrator | testbed-node-1 : 
ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:55:01.367038 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 03:55:01.367046 | orchestrator | 2026-02-18 03:55:01.367054 | orchestrator | 2026-02-18 03:55:01.367063 | orchestrator | 2026-02-18 03:55:01.367071 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:55:01.367080 | orchestrator | Wednesday 18 February 2026 03:55:00 +0000 (0:00:01.330) 0:01:16.917 **** 2026-02-18 03:55:01.367088 | orchestrator | =============================================================================== 2026-02-18 03:55:01.367096 | orchestrator | Create admin user ------------------------------------------------------ 41.08s 2026-02-18 03:55:01.367119 | orchestrator | Restart ceph manager service ------------------------------------------- 24.46s 2026-02-18 03:55:01.367128 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.00s 2026-02-18 03:55:01.367136 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.93s 2026-02-18 03:55:01.367145 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.23s 2026-02-18 03:55:01.367153 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.21s 2026-02-18 03:55:01.367162 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.21s 2026-02-18 03:55:01.367170 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.14s 2026-02-18 03:55:01.367178 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.14s 2026-02-18 03:55:01.367187 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.08s 2026-02-18 03:55:01.367195 | orchestrator | Remove temporary 
file for ceph_dashboard_password ----------------------- 0.15s 2026-02-18 03:55:01.718813 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-02-18 03:55:03.840557 | orchestrator | 2026-02-18 03:55:03 | INFO  | Task 1c87e06f-e37e-4c22-a69b-9ca4ceaa4a71 (keystone) was prepared for execution. 2026-02-18 03:55:03.840642 | orchestrator | 2026-02-18 03:55:03 | INFO  | It takes a moment until task 1c87e06f-e37e-4c22-a69b-9ca4ceaa4a71 (keystone) has been started and output is visible here. 2026-02-18 03:55:11.784215 | orchestrator | 2026-02-18 03:55:11.784358 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 03:55:11.784376 | orchestrator | 2026-02-18 03:55:11.784389 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 03:55:11.784419 | orchestrator | Wednesday 18 February 2026 03:55:08 +0000 (0:00:00.332) 0:00:00.332 **** 2026-02-18 03:55:11.784478 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:55:11.784503 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:55:11.784521 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:55:11.784540 | orchestrator | 2026-02-18 03:55:11.784557 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 03:55:11.784574 | orchestrator | Wednesday 18 February 2026 03:55:08 +0000 (0:00:00.331) 0:00:00.663 **** 2026-02-18 03:55:11.784620 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-18 03:55:11.784640 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-18 03:55:11.784659 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-18 03:55:11.784675 | orchestrator | 2026-02-18 03:55:11.784686 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-18 03:55:11.784696 | orchestrator | 2026-02-18 03:55:11.784707 | orchestrator | 
TASK [keystone : include_tasks] ************************************************ 2026-02-18 03:55:11.784718 | orchestrator | Wednesday 18 February 2026 03:55:09 +0000 (0:00:00.495) 0:00:01.159 **** 2026-02-18 03:55:11.784729 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:55:11.784741 | orchestrator | 2026-02-18 03:55:11.784754 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-18 03:55:11.784766 | orchestrator | Wednesday 18 February 2026 03:55:09 +0000 (0:00:00.593) 0:00:01.753 **** 2026-02-18 03:55:11.784786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:55:11.784806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:55:11.784851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:55:11.784877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-18 03:55:11.784892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-18 03:55:11.784904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-18 03:55:11.784917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-18 03:55:11.784930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-18 03:55:11.784943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-18 03:55:11.784962 | orchestrator | 2026-02-18 03:55:11.784975 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-02-18 03:55:11.784995 | orchestrator | Wednesday 18 February 2026 03:55:11 +0000 (0:00:01.899) 0:00:03.652 **** 2026-02-18 03:55:17.740026 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:55:17.740152 | orchestrator | 2026-02-18 03:55:17.740175 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-18 03:55:17.740213 | orchestrator | Wednesday 18 February 2026 03:55:12 +0000 (0:00:00.332) 0:00:03.984 **** 2026-02-18 03:55:17.740233 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:55:17.740250 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:55:17.740265 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:55:17.740281 | orchestrator | 2026-02-18 03:55:17.740297 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-18 03:55:17.740313 | orchestrator | Wednesday 18 February 2026 03:55:12 +0000 (0:00:00.317) 0:00:04.302 **** 2026-02-18 03:55:17.740329 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 03:55:17.740346 | orchestrator | 2026-02-18 03:55:17.740362 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-18 03:55:17.740379 | orchestrator | Wednesday 18 February 2026 03:55:13 +0000 (0:00:00.901) 0:00:05.203 **** 2026-02-18 03:55:17.740396 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:55:17.740412 | orchestrator | 2026-02-18 03:55:17.740501 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-18 03:55:17.740521 | orchestrator | Wednesday 18 February 2026 03:55:13 +0000 (0:00:00.611) 0:00:05.815 **** 2026-02-18 03:55:17.740547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:55:17.740571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:55:17.740592 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:55:17.740671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-18 03:55:17.740694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-18 03:55:17.740712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-18 03:55:17.740730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-18 03:55:17.740748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-18 03:55:17.740777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-18 03:55:17.740794 | orchestrator | 2026-02-18 03:55:17.740812 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-18 03:55:17.740828 | orchestrator | Wednesday 18 February 2026 03:55:17 +0000 (0:00:03.215) 0:00:09.030 **** 2026-02-18 03:55:17.740858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-18 03:55:18.619997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 03:55:18.620091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:55:18.620105 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:55:18.620119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-18 03:55:18.620146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 03:55:18.620161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:55:18.620171 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:55:18.620199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-18 03:55:18.620210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-18 03:55:18.620219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:55:18.620235 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:55:18.620244 | orchestrator | 2026-02-18 03:55:18.620254 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-18 03:55:18.620264 | orchestrator | Wednesday 18 February 2026 03:55:17 +0000 (0:00:00.587) 0:00:09.617 **** 2026-02-18 03:55:18.620274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-18 03:55:18.620288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 03:55:18.620305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:55:21.996942 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:55:21.997072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-18 03:55:21.997097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 03:55:21.997143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:55:21.997202 | 
orchestrator | skipping: [testbed-node-1] 2026-02-18 03:55:21.997227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-18 03:55:21.997237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 03:55:21.997260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 03:55:21.997269 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:55:21.997276 | orchestrator | 2026-02-18 03:55:21.997284 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-18 03:55:21.997293 | orchestrator | Wednesday 18 February 2026 03:55:18 +0000 (0:00:00.878) 0:00:10.496 **** 2026-02-18 03:55:21.997301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:55:21.997316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:55:21.997329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:55:21.997344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-18 03:55:26.795292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-18 03:55:26.795523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-18 03:55:26.795546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-18 03:55:26.795559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-18 03:55:26.795592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-18 
03:55:26.795606 | orchestrator | 2026-02-18 03:55:26.795627 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-18 03:55:26.795648 | orchestrator | Wednesday 18 February 2026 03:55:21 +0000 (0:00:03.380) 0:00:13.877 **** 2026-02-18 03:55:26.795744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-18 03:55:26.795776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-18 03:55:26.795817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-18 03:55:26.795838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-18 03:55:26.795859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-18 03:55:26.795883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-18 03:55:30.618268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-18 03:55:30.618407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-18 03:55:30.618463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-18 03:55:30.618473 | orchestrator |
2026-02-18 03:55:30.618481 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-02-18 03:55:30.618490 | orchestrator | Wednesday 18 February 2026 03:55:26 +0000 (0:00:04.791) 0:00:18.669 ****
2026-02-18 03:55:30.618498 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:55:30.618506 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:55:30.618512 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:55:30.618519 | orchestrator |
2026-02-18 03:55:30.618526 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-02-18 03:55:30.618532 | orchestrator | Wednesday 18 February 2026 03:55:28 +0000 (0:00:01.468) 0:00:20.137 ****
2026-02-18 03:55:30.618539 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:55:30.618545 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:55:30.618552 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:55:30.618558 | orchestrator |
2026-02-18 03:55:30.618565 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-02-18 03:55:30.618572 | orchestrator | Wednesday 18 February 2026 03:55:29 +0000 (0:00:00.791) 0:00:20.928 ****
2026-02-18 03:55:30.618578 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:55:30.618585 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:55:30.618591 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:55:30.618598 | orchestrator |
2026-02-18 03:55:30.618620 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-02-18 03:55:30.618627 | orchestrator | Wednesday 18 February 2026 03:55:29 +0000 (0:00:00.564) 0:00:21.493 ****
2026-02-18 03:55:30.618633 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:55:30.618640 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:55:30.618647 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:55:30.618653 | orchestrator |
2026-02-18 03:55:30.618661 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-02-18 03:55:30.618668 | orchestrator | Wednesday 18 February 2026 03:55:29 +0000 (0:00:00.348) 0:00:21.841 ****
2026-02-18 03:55:30.618695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-18 03:55:30.618710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-18 03:55:30.618718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-18 03:55:30.618725 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:55:30.618732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-18 03:55:30.618744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-18 03:55:30.618751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-18 03:55:30.618767 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:55:30.618781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-18 03:55:50.131574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-18 03:55:50.131739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-18 03:55:50.131770 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:55:50.131793 | orchestrator |
2026-02-18 03:55:50.131816 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-18 03:55:50.131840 | orchestrator | Wednesday 18 February 2026 03:55:30 +0000 (0:00:00.654) 0:00:22.495 ****
2026-02-18 03:55:50.131860 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:55:50.131880 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:55:50.131901 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:55:50.131921 | orchestrator |
2026-02-18 03:55:50.131941 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-02-18 03:55:50.131964 | orchestrator | Wednesday 18 February 2026 03:55:30 +0000 (0:00:00.312) 0:00:22.808 ****
2026-02-18 03:55:50.131985 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-18 03:55:50.132007 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-18 03:55:50.132060 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-18 03:55:50.132082 | orchestrator |
2026-02-18 03:55:50.132121 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-02-18 03:55:50.132143 | orchestrator | Wednesday 18 February 2026 03:55:32 +0000 (0:00:01.855) 0:00:24.663 ****
2026-02-18 03:55:50.132164 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-18 03:55:50.132184 | orchestrator |
2026-02-18 03:55:50.132205 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-02-18 03:55:50.132225 | orchestrator | Wednesday 18 February 2026 03:55:33 +0000 (0:00:01.021) 0:00:25.684 ****
2026-02-18 03:55:50.132246 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:55:50.132266 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:55:50.132286 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:55:50.132307 | orchestrator |
2026-02-18 03:55:50.132327 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-02-18 03:55:50.132348 | orchestrator | Wednesday 18 February 2026 03:55:34 +0000 (0:00:00.611) 0:00:26.296 ****
2026-02-18 03:55:50.132367 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-18 03:55:50.132387 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-18 03:55:50.132422 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-18 03:55:50.132433 | orchestrator |
2026-02-18 03:55:50.132444 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-02-18 03:55:50.132456 | orchestrator | Wednesday 18 February 2026 03:55:35 +0000 (0:00:01.086) 0:00:27.383 ****
2026-02-18 03:55:50.132467 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:55:50.132478 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:55:50.132489 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:55:50.132500 | orchestrator |
2026-02-18 03:55:50.132510 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-02-18 03:55:50.132521 | orchestrator | Wednesday 18 February 2026 03:55:36 +0000 (0:00:00.560) 0:00:27.944 ****
2026-02-18 03:55:50.132532 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-18 03:55:50.132543 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-18 03:55:50.132554 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-18 03:55:50.132565 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-18 03:55:50.132576 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-18 03:55:50.132587 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-18 03:55:50.132598 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-18 03:55:50.132609 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-18 03:55:50.132641 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-18 03:55:50.132652 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-18 03:55:50.132663 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-18 03:55:50.132675 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-18 03:55:50.132696 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-18 03:55:50.132721 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-18 03:55:50.132746 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-18 03:55:50.132763 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-18 03:55:50.132797 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-18 03:55:50.132816 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-18 03:55:50.132836 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-18 03:55:50.132857 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-18 03:55:50.132876 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-18 03:55:50.132895 | orchestrator |
2026-02-18 03:55:50.132912 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-02-18 03:55:50.132923 | orchestrator | Wednesday 18 February 2026 03:55:45 +0000 (0:00:09.135) 0:00:37.079 ****
2026-02-18 03:55:50.132934 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-18 03:55:50.132944 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-18 03:55:50.132955 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-18 03:55:50.132965 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-18 03:55:50.132976 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-18 03:55:50.132986 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-18 03:55:50.132997 | orchestrator |
2026-02-18 03:55:50.133007 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-02-18 03:55:50.133026 | orchestrator | Wednesday 18 February 2026 03:55:47 +0000 (0:00:02.635) 0:00:39.715 ****
2026-02-18 03:55:50.133048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-18 03:55:50.133087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-18 03:57:30.354909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-18 03:57:30.355035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-18 03:57:30.355063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-18 03:57:30.355073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-18 03:57:30.355081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-18 03:57:30.355160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-18 03:57:30.355179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-18 03:57:30.355188 | orchestrator |
2026-02-18 03:57:30.355198 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-18 03:57:30.355207 | orchestrator | Wednesday 18 February 2026 03:55:50 +0000 (0:00:02.291) 0:00:42.007 ****
2026-02-18 03:57:30.355215 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:57:30.355224 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:57:30.355232 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:57:30.355240 | orchestrator |
2026-02-18 03:57:30.355248 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-02-18 03:57:30.355256 | orchestrator | Wednesday 18 February 2026 03:55:50 +0000 (0:00:00.571) 0:00:42.578 ****
2026-02-18 03:57:30.355263 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:57:30.355271 | orchestrator |
2026-02-18 03:57:30.355279 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-02-18 03:57:30.355287 | orchestrator | Wednesday 18 February 2026 03:55:52 +0000 (0:00:02.307) 0:00:44.886 ****
2026-02-18 03:57:30.355294 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:57:30.355302 | orchestrator |
2026-02-18 03:57:30.355370 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-02-18 03:57:30.355382 | orchestrator | Wednesday 18 February 2026 03:55:55 +0000 (0:00:02.329) 0:00:47.216 ****
2026-02-18 03:57:30.355391 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:57:30.355401 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:57:30.355412 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:57:30.355425 | orchestrator |
2026-02-18 03:57:30.355445 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-02-18 03:57:30.355459 | orchestrator | Wednesday 18 February 2026 03:55:56 +0000 (0:00:00.901) 0:00:48.118 ****
2026-02-18 03:57:30.355513 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:57:30.355526 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:57:30.355539 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:57:30.355551 | orchestrator |
2026-02-18 03:57:30.355565 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-02-18 03:57:30.355588 | orchestrator | Wednesday 18 February 2026 03:55:56 +0000 (0:00:00.328) 0:00:48.446 ****
2026-02-18 03:57:30.355602 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:57:30.355615 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:57:30.355629 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:57:30.355642 | orchestrator |
2026-02-18 03:57:30.355654 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-02-18 03:57:30.355667 | orchestrator | Wednesday 18 February 2026 03:55:57 +0000 (0:00:00.559) 0:00:49.006 ****
2026-02-18 03:57:30.355681 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:57:30.355764 | orchestrator |
2026-02-18 03:57:30.355780 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-02-18 03:57:30.355793 | orchestrator | Wednesday 18 February 2026 03:56:12 +0000 (0:00:15.185) 0:01:04.192 ****
2026-02-18 03:57:30.355805 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:57:30.355817 | orchestrator |
2026-02-18 03:57:30.355829 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-18 03:57:30.355843 | orchestrator | Wednesday 18 February 2026 03:56:23 +0000 (0:00:11.230) 0:01:15.422 ****
2026-02-18 03:57:30.355871 | orchestrator |
2026-02-18 03:57:30.355886 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-18 03:57:30.355895 | orchestrator | Wednesday 18 February 2026 03:56:23 +0000 (0:00:00.089) 0:01:15.512 ****
2026-02-18 03:57:30.355902 | orchestrator |
2026-02-18 03:57:30.355910 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-18 03:57:30.355917 | orchestrator | Wednesday 18 February 2026 03:56:23 +0000 (0:00:00.084) 0:01:15.596 ****
2026-02-18 03:57:30.355925 | orchestrator |
2026-02-18 03:57:30.355933 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-02-18 03:57:30.355941 | orchestrator | Wednesday 18 February 2026 03:56:23 +0000 (0:00:00.074) 0:01:15.670 ****
2026-02-18 03:57:30.355948 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:57:30.355956 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:57:30.355964 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:57:30.356003 | orchestrator |
2026-02-18 03:57:30.356012 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-02-18 03:57:30.356019 | orchestrator | Wednesday 18 February 2026 03:57:11 +0000 (0:00:47.538) 0:02:03.209 ****
2026-02-18 03:57:30.356027 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:57:30.356035 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:57:30.356043 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:57:30.356050 | orchestrator |
2026-02-18 03:57:30.356058 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-02-18 03:57:30.356066 | orchestrator | Wednesday 18 February 2026 03:57:22 +0000 (0:00:10.853) 0:02:14.063 ****
2026-02-18 03:57:30.356073 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:57:30.356081 | orchestrator | changed: [testbed-node-1]
2026-02-18 03:57:30.356089 | orchestrator | changed: [testbed-node-2]
2026-02-18 03:57:30.356097 | orchestrator |
2026-02-18 03:57:30.356104 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-18 03:57:30.356112 | orchestrator | Wednesday 18 February 2026 03:57:29 +0000 (0:00:07.573) 0:02:21.637 ****
2026-02-18 03:57:30.356132 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 03:58:21.305082 | orchestrator |
2026-02-18 03:58:21.305201 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-18 03:58:21.305219 | orchestrator | Wednesday 18 February 2026 03:57:30 +0000 (0:00:00.596) 0:02:22.234 ****
2026-02-18 03:58:21.305232 | orchestrator | ok: [testbed-node-1]
2026-02-18 03:58:21.305245 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:58:21.305257 | orchestrator | ok: [testbed-node-2]
2026-02-18 03:58:21.305268 | orchestrator |
2026-02-18 03:58:21.305321 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-18 03:58:21.305333 | orchestrator | Wednesday 18 February 2026 03:57:31 +0000 (0:00:01.169) 0:02:23.403 ****
2026-02-18 03:58:21.305345 | orchestrator | changed: [testbed-node-0]
2026-02-18 03:58:21.305357 | orchestrator |
2026-02-18 03:58:21.305368 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-18 03:58:21.305379 | orchestrator | Wednesday 18 February 2026 03:57:33 +0000 (0:00:01.847) 0:02:25.251 ****
2026-02-18 03:58:21.305390 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-18 03:58:21.305401 | orchestrator |
2026-02-18 03:58:21.305412 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-02-18 03:58:21.305423 | orchestrator | Wednesday 18 February 2026 03:57:45 +0000 (0:00:11.893) 0:02:37.144 ****
2026-02-18 03:58:21.305434 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-18 03:58:21.305445 | orchestrator |
2026-02-18 03:58:21.305455 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-02-18 03:58:21.305466 | orchestrator | Wednesday 18 February 2026 03:58:09 +0000 (0:00:24.124) 0:03:01.269 ****
2026-02-18 03:58:21.305477 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-02-18 03:58:21.305513 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-02-18 03:58:21.305525 | orchestrator |
2026-02-18 03:58:21.305535 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-02-18 03:58:21.305546 | orchestrator | Wednesday 18 February 2026 03:58:15 +0000 (0:00:06.386) 0:03:07.655 ****
2026-02-18 03:58:21.305557 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:58:21.305568 | orchestrator |
2026-02-18 03:58:21.305578 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-18 03:58:21.305589 | orchestrator | Wednesday 18 February 2026 03:58:15 +0000 (0:00:00.137) 0:03:07.793 ****
2026-02-18 03:58:21.305600 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:58:21.305612 | orchestrator |
2026-02-18 03:58:21.305625 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-18 03:58:21.305637 | orchestrator | Wednesday 18 February 2026 03:58:16 +0000 (0:00:00.129) 0:03:07.923 ****
2026-02-18 03:58:21.305649 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:58:21.305661 | orchestrator |
2026-02-18 03:58:21.305689 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-18 03:58:21.305702 | orchestrator | Wednesday 18 February 2026 03:58:16 +0000 (0:00:00.155) 0:03:08.079 ****
2026-02-18 03:58:21.305714 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:58:21.305726 | orchestrator |
2026-02-18 03:58:21.305739 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-18 03:58:21.305752 | orchestrator | Wednesday 18 February 2026 03:58:16 +0000 (0:00:00.605) 0:03:08.684 ****
2026-02-18 03:58:21.305765 | orchestrator | ok: [testbed-node-0]
2026-02-18 03:58:21.305777 | orchestrator |
2026-02-18 03:58:21.305789 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-18 03:58:21.305802 | orchestrator | Wednesday 18 February 2026 03:58:20 +0000 (0:00:03.554) 0:03:12.238 ****
2026-02-18 03:58:21.305814 | orchestrator | skipping: [testbed-node-0]
2026-02-18 03:58:21.305827 | orchestrator | skipping: [testbed-node-1]
2026-02-18 03:58:21.305839 | orchestrator | skipping: [testbed-node-2]
2026-02-18 03:58:21.305851 | orchestrator |
2026-02-18 03:58:21.305864 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 03:58:21.305877 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-18 03:58:21.305891 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-18 03:58:21.305902 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-18 03:58:21.305913 | orchestrator |
2026-02-18 03:58:21.305923 | orchestrator |
2026-02-18 03:58:21.305935 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 03:58:21.305945 | orchestrator | Wednesday 18 February 2026 03:58:20 +0000 (0:00:00.500) 0:03:12.738 ****
2026-02-18 03:58:21.305956 | orchestrator | ===============================================================================
2026-02-18 03:58:21.305967 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 47.54s
2026-02-18 03:58:21.305977 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.12s
2026-02-18 03:58:21.305987 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.19s
2026-02-18 03:58:21.305998 | orchestrator | keystone : Creating admin project, user, role, service, and
endpoint --- 11.89s 2026-02-18 03:58:21.306009 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.23s 2026-02-18 03:58:21.306084 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.85s 2026-02-18 03:58:21.306096 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.14s 2026-02-18 03:58:21.306107 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.57s 2026-02-18 03:58:21.306127 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.39s 2026-02-18 03:58:21.306156 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.79s 2026-02-18 03:58:21.306168 | orchestrator | keystone : Creating default user role ----------------------------------- 3.55s 2026-02-18 03:58:21.306179 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.38s 2026-02-18 03:58:21.306190 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.22s 2026-02-18 03:58:21.306201 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.64s 2026-02-18 03:58:21.306212 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.33s 2026-02-18 03:58:21.306222 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.31s 2026-02-18 03:58:21.306233 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.29s 2026-02-18 03:58:21.306244 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.90s 2026-02-18 03:58:21.306255 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.86s 2026-02-18 03:58:21.306266 | orchestrator | keystone : Run key distribution ----------------------------------------- 
1.85s 2026-02-18 03:58:23.855333 | orchestrator | 2026-02-18 03:58:23 | INFO  | Task e0453bed-6d85-4a15-962b-2f5427646db8 (placement) was prepared for execution. 2026-02-18 03:58:23.855510 | orchestrator | 2026-02-18 03:58:23 | INFO  | It takes a moment until task e0453bed-6d85-4a15-962b-2f5427646db8 (placement) has been started and output is visible here. 2026-02-18 03:59:00.955007 | orchestrator | 2026-02-18 03:59:00.955137 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 03:59:00.955156 | orchestrator | 2026-02-18 03:59:00.955168 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 03:59:00.955180 | orchestrator | Wednesday 18 February 2026 03:58:28 +0000 (0:00:00.272) 0:00:00.272 **** 2026-02-18 03:59:00.955191 | orchestrator | ok: [testbed-node-0] 2026-02-18 03:59:00.955203 | orchestrator | ok: [testbed-node-1] 2026-02-18 03:59:00.955214 | orchestrator | ok: [testbed-node-2] 2026-02-18 03:59:00.956149 | orchestrator | 2026-02-18 03:59:00.956217 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 03:59:00.956231 | orchestrator | Wednesday 18 February 2026 03:58:28 +0000 (0:00:00.329) 0:00:00.601 **** 2026-02-18 03:59:00.956239 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-18 03:59:00.956246 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-18 03:59:00.956275 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-18 03:59:00.956282 | orchestrator | 2026-02-18 03:59:00.956304 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-18 03:59:00.956310 | orchestrator | 2026-02-18 03:59:00.956317 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-18 03:59:00.956323 | orchestrator | Wednesday 18 February 2026 
03:58:29 +0000 (0:00:00.477) 0:00:01.079 **** 2026-02-18 03:59:00.956330 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:59:00.956337 | orchestrator | 2026-02-18 03:59:00.956343 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-18 03:59:00.956349 | orchestrator | Wednesday 18 February 2026 03:58:29 +0000 (0:00:00.566) 0:00:01.646 **** 2026-02-18 03:59:00.956356 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-18 03:59:00.956362 | orchestrator | 2026-02-18 03:59:00.956368 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-18 03:59:00.956374 | orchestrator | Wednesday 18 February 2026 03:58:33 +0000 (0:00:04.254) 0:00:05.900 **** 2026-02-18 03:59:00.956381 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-18 03:59:00.956407 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-18 03:59:00.956414 | orchestrator | 2026-02-18 03:59:00.956420 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-18 03:59:00.956426 | orchestrator | Wednesday 18 February 2026 03:58:40 +0000 (0:00:06.977) 0:00:12.877 **** 2026-02-18 03:59:00.956432 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-18 03:59:00.956438 | orchestrator | 2026-02-18 03:59:00.956444 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-18 03:59:00.956451 | orchestrator | Wednesday 18 February 2026 03:58:44 +0000 (0:00:03.718) 0:00:16.596 **** 2026-02-18 03:59:00.956457 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-18 03:59:00.956463 | orchestrator | changed: [testbed-node-0] => (item=placement 
-> service) 2026-02-18 03:59:00.956469 | orchestrator | 2026-02-18 03:59:00.956475 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-18 03:59:00.956481 | orchestrator | Wednesday 18 February 2026 03:58:48 +0000 (0:00:04.200) 0:00:20.797 **** 2026-02-18 03:59:00.956487 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-18 03:59:00.956494 | orchestrator | 2026-02-18 03:59:00.956500 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-18 03:59:00.956506 | orchestrator | Wednesday 18 February 2026 03:58:52 +0000 (0:00:03.364) 0:00:24.161 **** 2026-02-18 03:59:00.956512 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-18 03:59:00.956518 | orchestrator | 2026-02-18 03:59:00.956524 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-18 03:59:00.956530 | orchestrator | Wednesday 18 February 2026 03:58:56 +0000 (0:00:04.315) 0:00:28.476 **** 2026-02-18 03:59:00.956537 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:59:00.956543 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:59:00.956549 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:59:00.956555 | orchestrator | 2026-02-18 03:59:00.956561 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-18 03:59:00.956567 | orchestrator | Wednesday 18 February 2026 03:58:56 +0000 (0:00:00.353) 0:00:28.829 **** 2026-02-18 03:59:00.956576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:00.956613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:00.956638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:00.956645 | orchestrator | 2026-02-18 03:59:00.956651 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-18 03:59:00.956658 | orchestrator | Wednesday 18 February 2026 03:58:57 +0000 (0:00:01.101) 0:00:29.931 **** 2026-02-18 03:59:00.956664 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:59:00.956670 | orchestrator | 2026-02-18 03:59:00.956676 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-18 03:59:00.956682 | orchestrator | Wednesday 18 February 2026 03:58:58 +0000 (0:00:00.376) 0:00:30.308 **** 2026-02-18 03:59:00.956688 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:59:00.956694 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:59:00.956700 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:59:00.956706 | orchestrator | 2026-02-18 03:59:00.956712 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-18 03:59:00.956719 | orchestrator | Wednesday 18 February 2026 03:58:58 +0000 (0:00:00.343) 0:00:30.651 **** 2026-02-18 03:59:00.956725 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 03:59:00.956731 | orchestrator | 2026-02-18 03:59:00.956737 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-18 03:59:00.956743 | orchestrator | Wednesday 18 February 2026 03:58:59 +0000 
(0:00:00.667) 0:00:31.319 **** 2026-02-18 03:59:00.956750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:00.956764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:03.799110 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:03.799211 | orchestrator | 2026-02-18 03:59:03.799226 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-18 03:59:03.799238 | orchestrator | Wednesday 18 February 2026 03:59:00 +0000 (0:00:01.691) 0:00:33.011 **** 2026-02-18 03:59:03.799330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-18 03:59:03.799345 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:59:03.799356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-18 03:59:03.799366 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:59:03.799377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-18 03:59:03.799408 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:59:03.799419 | orchestrator | 2026-02-18 03:59:03.799429 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-18 03:59:03.799456 | orchestrator | Wednesday 18 February 2026 03:59:01 +0000 (0:00:00.511) 0:00:33.522 **** 2026-02-18 03:59:03.799474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-18 03:59:03.799485 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:59:03.799495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-18 03:59:03.799505 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:59:03.799515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-18 03:59:03.799525 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:59:03.799535 | orchestrator | 2026-02-18 03:59:03.799545 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-18 03:59:03.799554 | orchestrator | Wednesday 18 February 2026 03:59:02 +0000 (0:00:00.708) 0:00:34.231 **** 2026-02-18 03:59:03.799564 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:03.799596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:11.178483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:11.178602 | orchestrator | 2026-02-18 03:59:11.178617 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-18 03:59:11.178628 | orchestrator | Wednesday 18 February 2026 03:59:03 +0000 (0:00:01.624) 0:00:35.855 **** 2026-02-18 03:59:11.178638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-18 03:59:11.179375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:11.179424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:11.179435 | orchestrator | 2026-02-18 03:59:11.179444 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-18 03:59:11.179452 | orchestrator | Wednesday 18 February 2026 03:59:06 +0000 (0:00:02.643) 0:00:38.499 **** 2026-02-18 03:59:11.179478 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-18 03:59:11.179488 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-18 03:59:11.179497 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-18 03:59:11.179505 | orchestrator | 2026-02-18 03:59:11.179514 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-18 03:59:11.179523 | orchestrator | Wednesday 18 February 2026 03:59:07 +0000 (0:00:01.514) 0:00:40.014 **** 2026-02-18 03:59:11.179531 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:59:11.179541 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:59:11.179550 | orchestrator | changed: [testbed-node-2] 2026-02-18 03:59:11.179559 | orchestrator | 2026-02-18 03:59:11.179567 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-18 03:59:11.179576 | orchestrator | Wednesday 18 February 2026 03:59:09 +0000 (0:00:01.347) 0:00:41.361 **** 2026-02-18 03:59:11.179585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-18 03:59:11.179594 | orchestrator | skipping: [testbed-node-0] 2026-02-18 03:59:11.179603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-18 03:59:11.179619 | orchestrator | skipping: [testbed-node-1] 2026-02-18 03:59:11.179629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-18 03:59:11.179638 | orchestrator | skipping: [testbed-node-2] 2026-02-18 03:59:11.179646 | orchestrator | 2026-02-18 03:59:11.179655 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-18 03:59:11.179668 | orchestrator | Wednesday 18 February 2026 03:59:10 +0000 (0:00:00.774) 0:00:42.136 **** 2026-02-18 03:59:11.179685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:36.853629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:36.853764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-18 03:59:36.853779 | orchestrator | 2026-02-18 03:59:36.853790 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-18 03:59:36.853801 | orchestrator | Wednesday 18 February 2026 03:59:11 +0000 (0:00:01.091) 0:00:43.227 **** 2026-02-18 03:59:36.853810 | orchestrator | changed: [testbed-node-0] 2026-02-18 
03:59:36.853819 | orchestrator | 2026-02-18 03:59:36.853828 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-18 03:59:36.853836 | orchestrator | Wednesday 18 February 2026 03:59:13 +0000 (0:00:02.294) 0:00:45.522 **** 2026-02-18 03:59:36.853845 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:59:36.853854 | orchestrator | 2026-02-18 03:59:36.853862 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-18 03:59:36.853871 | orchestrator | Wednesday 18 February 2026 03:59:15 +0000 (0:00:02.250) 0:00:47.772 **** 2026-02-18 03:59:36.853880 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:59:36.853888 | orchestrator | 2026-02-18 03:59:36.853896 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-18 03:59:36.853905 | orchestrator | Wednesday 18 February 2026 03:59:30 +0000 (0:00:14.841) 0:01:02.614 **** 2026-02-18 03:59:36.853914 | orchestrator | 2026-02-18 03:59:36.853922 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-18 03:59:36.853931 | orchestrator | Wednesday 18 February 2026 03:59:30 +0000 (0:00:00.087) 0:01:02.701 **** 2026-02-18 03:59:36.853939 | orchestrator | 2026-02-18 03:59:36.853947 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-18 03:59:36.853956 | orchestrator | Wednesday 18 February 2026 03:59:30 +0000 (0:00:00.074) 0:01:02.775 **** 2026-02-18 03:59:36.853964 | orchestrator | 2026-02-18 03:59:36.853973 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-18 03:59:36.853982 | orchestrator | Wednesday 18 February 2026 03:59:30 +0000 (0:00:00.072) 0:01:02.848 **** 2026-02-18 03:59:36.853990 | orchestrator | changed: [testbed-node-0] 2026-02-18 03:59:36.854011 | orchestrator | changed: [testbed-node-2] 2026-02-18 
03:59:36.854056 | orchestrator | changed: [testbed-node-1] 2026-02-18 03:59:36.854066 | orchestrator | 2026-02-18 03:59:36.854074 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 03:59:36.854084 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 03:59:36.854094 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-18 03:59:36.854103 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-18 03:59:36.854111 | orchestrator | 2026-02-18 03:59:36.854120 | orchestrator | 2026-02-18 03:59:36.854129 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 03:59:36.854137 | orchestrator | Wednesday 18 February 2026 03:59:36 +0000 (0:00:05.675) 0:01:08.523 **** 2026-02-18 03:59:36.854153 | orchestrator | =============================================================================== 2026-02-18 03:59:36.854162 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.84s 2026-02-18 03:59:36.854186 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.98s 2026-02-18 03:59:36.854197 | orchestrator | placement : Restart placement-api container ----------------------------- 5.68s 2026-02-18 03:59:36.854208 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.32s 2026-02-18 03:59:36.854218 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.25s 2026-02-18 03:59:36.854261 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.20s 2026-02-18 03:59:36.854272 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.72s 2026-02-18 03:59:36.854282 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.36s 2026-02-18 03:59:36.854292 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.64s 2026-02-18 03:59:36.854302 | orchestrator | placement : Creating placement databases -------------------------------- 2.29s 2026-02-18 03:59:36.854312 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.25s 2026-02-18 03:59:36.854322 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.69s 2026-02-18 03:59:36.854332 | orchestrator | placement : Copying over config.json files for services ----------------- 1.62s 2026-02-18 03:59:36.854341 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.51s 2026-02-18 03:59:36.854349 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.35s 2026-02-18 03:59:36.854358 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.10s 2026-02-18 03:59:36.854367 | orchestrator | placement : Check placement containers ---------------------------------- 1.09s 2026-02-18 03:59:36.854375 | orchestrator | placement : Copying over existing policy file --------------------------- 0.77s 2026-02-18 03:59:36.854384 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s 2026-02-18 03:59:36.854393 | orchestrator | placement : include_tasks ----------------------------------------------- 0.67s 2026-02-18 03:59:39.329981 | orchestrator | 2026-02-18 03:59:39 | INFO  | Task c5cff2af-a931-4bb7-9e2d-12a3c269ca44 (neutron) was prepared for execution. 2026-02-18 03:59:39.330106 | orchestrator | 2026-02-18 03:59:39 | INFO  | It takes a moment until task c5cff2af-a931-4bb7-9e2d-12a3c269ca44 (neutron) has been started and output is visible here. 
2026-02-18 04:00:29.868021 | orchestrator | 2026-02-18 04:00:29.868220 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 04:00:29.868252 | orchestrator | 2026-02-18 04:00:29.868330 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 04:00:29.868356 | orchestrator | Wednesday 18 February 2026 03:59:43 +0000 (0:00:00.264) 0:00:00.264 **** 2026-02-18 04:00:29.868377 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:00:29.868398 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:00:29.868418 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:00:29.868438 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:00:29.868459 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:00:29.868478 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:00:29.868498 | orchestrator | 2026-02-18 04:00:29.868518 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 04:00:29.868538 | orchestrator | Wednesday 18 February 2026 03:59:44 +0000 (0:00:00.707) 0:00:00.971 **** 2026-02-18 04:00:29.868559 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-18 04:00:29.868579 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-18 04:00:29.868599 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-18 04:00:29.868619 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-18 04:00:29.868638 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-18 04:00:29.868692 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-18 04:00:29.868713 | orchestrator | 2026-02-18 04:00:29.868732 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-18 04:00:29.868749 | orchestrator | 2026-02-18 04:00:29.868767 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-18 04:00:29.868784 | orchestrator | Wednesday 18 February 2026 03:59:45 +0000 (0:00:00.611) 0:00:01.583 **** 2026-02-18 04:00:29.868823 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 04:00:29.868843 | orchestrator | 2026-02-18 04:00:29.868859 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-18 04:00:29.868878 | orchestrator | Wednesday 18 February 2026 03:59:46 +0000 (0:00:01.200) 0:00:02.784 **** 2026-02-18 04:00:29.868896 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:00:29.868915 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:00:29.868935 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:00:29.868954 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:00:29.868973 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:00:29.868991 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:00:29.869011 | orchestrator | 2026-02-18 04:00:29.869032 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-18 04:00:29.869052 | orchestrator | Wednesday 18 February 2026 03:59:47 +0000 (0:00:01.356) 0:00:04.140 **** 2026-02-18 04:00:29.869071 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:00:29.869090 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:00:29.869109 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:00:29.869128 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:00:29.869148 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:00:29.869168 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:00:29.869187 | orchestrator | 2026-02-18 04:00:29.869277 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-18 04:00:29.869297 | orchestrator | Wednesday 18 February 2026 03:59:48 +0000 (0:00:01.032) 0:00:05.173 **** 
2026-02-18 04:00:29.869315 | orchestrator | ok: [testbed-node-0] => { 2026-02-18 04:00:29.869335 | orchestrator |  "changed": false, 2026-02-18 04:00:29.869354 | orchestrator |  "msg": "All assertions passed" 2026-02-18 04:00:29.869373 | orchestrator | } 2026-02-18 04:00:29.869392 | orchestrator | ok: [testbed-node-1] => { 2026-02-18 04:00:29.869412 | orchestrator |  "changed": false, 2026-02-18 04:00:29.869432 | orchestrator |  "msg": "All assertions passed" 2026-02-18 04:00:29.869451 | orchestrator | } 2026-02-18 04:00:29.869471 | orchestrator | ok: [testbed-node-2] => { 2026-02-18 04:00:29.869490 | orchestrator |  "changed": false, 2026-02-18 04:00:29.869509 | orchestrator |  "msg": "All assertions passed" 2026-02-18 04:00:29.869528 | orchestrator | } 2026-02-18 04:00:29.869547 | orchestrator | ok: [testbed-node-3] => { 2026-02-18 04:00:29.869566 | orchestrator |  "changed": false, 2026-02-18 04:00:29.869585 | orchestrator |  "msg": "All assertions passed" 2026-02-18 04:00:29.869604 | orchestrator | } 2026-02-18 04:00:29.869623 | orchestrator | ok: [testbed-node-4] => { 2026-02-18 04:00:29.869642 | orchestrator |  "changed": false, 2026-02-18 04:00:29.869662 | orchestrator |  "msg": "All assertions passed" 2026-02-18 04:00:29.869682 | orchestrator | } 2026-02-18 04:00:29.869701 | orchestrator | ok: [testbed-node-5] => { 2026-02-18 04:00:29.869720 | orchestrator |  "changed": false, 2026-02-18 04:00:29.869740 | orchestrator |  "msg": "All assertions passed" 2026-02-18 04:00:29.869759 | orchestrator | } 2026-02-18 04:00:29.869778 | orchestrator | 2026-02-18 04:00:29.869797 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-18 04:00:29.869816 | orchestrator | Wednesday 18 February 2026 03:59:49 +0000 (0:00:00.785) 0:00:05.958 **** 2026-02-18 04:00:29.869835 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:00:29.869854 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:00:29.869874 | orchestrator 
| skipping: [testbed-node-2] 2026-02-18 04:00:29.869914 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:00:29.869934 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:00:29.869953 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:00:29.869972 | orchestrator | 2026-02-18 04:00:29.869992 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-18 04:00:29.870011 | orchestrator | Wednesday 18 February 2026 03:59:50 +0000 (0:00:00.601) 0:00:06.559 **** 2026-02-18 04:00:29.870125 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-18 04:00:29.870146 | orchestrator | 2026-02-18 04:00:29.870165 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-18 04:00:29.870184 | orchestrator | Wednesday 18 February 2026 03:59:54 +0000 (0:00:04.424) 0:00:10.984 **** 2026-02-18 04:00:29.870230 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-18 04:00:29.870251 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-18 04:00:29.870271 | orchestrator | 2026-02-18 04:00:29.870323 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-18 04:00:29.870361 | orchestrator | Wednesday 18 February 2026 04:00:01 +0000 (0:00:07.070) 0:00:18.054 **** 2026-02-18 04:00:29.870379 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-18 04:00:29.870412 | orchestrator | 2026-02-18 04:00:29.870431 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-18 04:00:29.870468 | orchestrator | Wednesday 18 February 2026 04:00:05 +0000 (0:00:03.413) 0:00:21.468 **** 2026-02-18 04:00:29.870500 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-18 04:00:29.870519 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-18 04:00:29.870538 | orchestrator | 2026-02-18 04:00:29.870557 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-18 04:00:29.870575 | orchestrator | Wednesday 18 February 2026 04:00:09 +0000 (0:00:03.985) 0:00:25.453 **** 2026-02-18 04:00:29.870593 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-18 04:00:29.870612 | orchestrator | 2026-02-18 04:00:29.870630 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-18 04:00:29.870649 | orchestrator | Wednesday 18 February 2026 04:00:12 +0000 (0:00:03.303) 0:00:28.757 **** 2026-02-18 04:00:29.870667 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-18 04:00:29.870684 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-18 04:00:29.870702 | orchestrator | 2026-02-18 04:00:29.870720 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-18 04:00:29.870738 | orchestrator | Wednesday 18 February 2026 04:00:20 +0000 (0:00:08.230) 0:00:36.987 **** 2026-02-18 04:00:29.870757 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:00:29.870775 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:00:29.870794 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:00:29.870812 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:00:29.870829 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:00:29.870861 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:00:29.870880 | orchestrator | 2026-02-18 04:00:29.870899 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-18 04:00:29.870917 | orchestrator | Wednesday 18 February 2026 04:00:21 +0000 (0:00:00.852) 0:00:37.840 **** 2026-02-18 04:00:29.870936 | orchestrator | skipping: [testbed-node-0] 2026-02-18 
04:00:29.870954 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:00:29.870973 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:00:29.870993 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:00:29.871011 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:00:29.871029 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:00:29.871047 | orchestrator | 2026-02-18 04:00:29.871065 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-18 04:00:29.871084 | orchestrator | Wednesday 18 February 2026 04:00:23 +0000 (0:00:02.325) 0:00:40.166 **** 2026-02-18 04:00:29.871119 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:00:29.871137 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:00:29.871157 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:00:29.871173 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:00:29.871257 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:00:29.871282 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:00:29.871303 | orchestrator | 2026-02-18 04:00:29.871321 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-18 04:00:29.871339 | orchestrator | Wednesday 18 February 2026 04:00:24 +0000 (0:00:01.231) 0:00:41.398 **** 2026-02-18 04:00:29.871357 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:00:29.871377 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:00:29.871396 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:00:29.871414 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:00:29.871433 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:00:29.871452 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:00:29.871470 | orchestrator | 2026-02-18 04:00:29.871489 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-18 04:00:29.871510 | orchestrator | Wednesday 18 February 2026 04:00:27 +0000 
(0:00:02.293) 0:00:43.691 **** 2026-02-18 04:00:29.871533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:29.871583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:35.083397 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:35.083549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:00:35.083569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:00:35.083582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:00:35.083594 | orchestrator | 2026-02-18 04:00:35.083607 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-18 04:00:35.083620 | orchestrator | Wednesday 18 February 2026 04:00:29 +0000 (0:00:02.570) 0:00:46.261 **** 2026-02-18 04:00:35.083630 | orchestrator | [WARNING]: Skipped 2026-02-18 04:00:35.083643 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-18 04:00:35.083654 | orchestrator | due to this access issue: 2026-02-18 04:00:35.083666 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-18 04:00:35.083676 | orchestrator | a directory 2026-02-18 04:00:35.083687 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 04:00:35.083698 | orchestrator | 2026-02-18 04:00:35.083709 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-18 04:00:35.083720 | orchestrator | Wednesday 18 February 2026 04:00:30 +0000 (0:00:00.808) 0:00:47.070 **** 2026-02-18 04:00:35.083732 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 04:00:35.083743 | orchestrator | 2026-02-18 04:00:35.083754 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-18 04:00:35.083781 | orchestrator | Wednesday 18 February 2026 04:00:31 +0000 (0:00:01.273) 0:00:48.343 **** 2026-02-18 04:00:35.083798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:35.083820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:35.083832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:35.083843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:00:35.083864 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:00:39.891857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:00:39.891953 | orchestrator | 2026-02-18 04:00:39.891968 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-18 04:00:39.891979 | orchestrator | Wednesday 18 February 2026 04:00:35 +0000 (0:00:03.131) 0:00:51.474 **** 2026-02-18 04:00:39.891992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:00:39.892004 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:00:39.892015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:39.892025 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:00:39.892036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:00:39.892046 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:00:39.892072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:39.892105 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:00:39.892121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:00:39.892132 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:00:39.892142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:39.892152 | orchestrator | skipping: 
[testbed-node-5] 2026-02-18 04:00:39.892162 | orchestrator | 2026-02-18 04:00:39.892171 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-18 04:00:39.892181 | orchestrator | Wednesday 18 February 2026 04:00:37 +0000 (0:00:01.999) 0:00:53.473 **** 2026-02-18 04:00:39.892252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:00:39.892263 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:00:39.892280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:00:45.360561 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:00:45.360682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:00:45.360697 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:00:45.360706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:45.360714 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:00:45.360721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:45.360727 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:00:45.360734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:45.360758 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:00:45.360765 | 
orchestrator | 2026-02-18 04:00:45.360772 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-18 04:00:45.360780 | orchestrator | Wednesday 18 February 2026 04:00:39 +0000 (0:00:02.813) 0:00:56.287 **** 2026-02-18 04:00:45.360786 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:00:45.360792 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:00:45.360798 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:00:45.360821 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:00:45.360828 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:00:45.360834 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:00:45.360840 | orchestrator | 2026-02-18 04:00:45.360846 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-18 04:00:45.360852 | orchestrator | Wednesday 18 February 2026 04:00:42 +0000 (0:00:02.434) 0:00:58.722 **** 2026-02-18 04:00:45.360865 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:00:45.360872 | orchestrator | 2026-02-18 04:00:45.360878 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-18 04:00:45.360898 | orchestrator | Wednesday 18 February 2026 04:00:42 +0000 (0:00:00.140) 0:00:58.862 **** 2026-02-18 04:00:45.360904 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:00:45.360911 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:00:45.360917 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:00:45.360923 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:00:45.360929 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:00:45.360935 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:00:45.360941 | orchestrator | 2026-02-18 04:00:45.360947 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-18 04:00:45.360954 | orchestrator | Wednesday 18 February 2026 
04:00:43 +0000 (0:00:00.620) 0:00:59.483 **** 2026-02-18 04:00:45.360965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:00:45.360971 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:00:45.360978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-02-18 04:00:45.360990 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:00:45.360996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:00:45.361003 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:00:45.361010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:45.361016 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:00:45.361031 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:53.995351 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:00:53.995459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:53.995479 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:00:53.995491 | orchestrator | 2026-02-18 04:00:53.995503 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-18 04:00:53.995515 | orchestrator | Wednesday 18 February 2026 04:00:45 +0000 (0:00:02.267) 0:01:01.750 **** 2026-02-18 
04:00:53.995528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:53.995563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:53.995576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:53.995620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:00:53.995635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:00:53.995653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:00:53.995665 | orchestrator | 2026-02-18 04:00:53.995676 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-18 04:00:53.995687 | orchestrator | Wednesday 18 February 2026 04:00:48 +0000 (0:00:03.055) 0:01:04.806 **** 2026-02-18 04:00:53.995699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:53.995710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:53.995736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:00:58.454867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:00:58.455010 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 
04:00:58.455026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:00:58.455040 | orchestrator | 2026-02-18 04:00:58.455053 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-18 04:00:58.455066 | orchestrator | Wednesday 18 February 2026 04:00:53 +0000 (0:00:05.583) 0:01:10.389 **** 2026-02-18 04:00:58.455077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-02-18 04:00:58.455105 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:00:58.455135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:00:58.455155 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:00:58.455167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:00:58.455245 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:00:58.455258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:58.455270 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:00:58.455282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:58.455294 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:00:58.455313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:00:58.455411 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:00:58.455426 | orchestrator | 2026-02-18 04:00:58.455439 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-18 04:00:58.455461 | orchestrator | Wednesday 18 February 2026 04:00:55 +0000 (0:00:01.863) 0:01:12.253 **** 2026-02-18 04:00:58.455475 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:00:58.455487 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:00:58.455500 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:00:58.455514 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:00:58.455527 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:00:58.455539 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:00:58.455552 | orchestrator | 2026-02-18 04:00:58.455565 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-18 04:00:58.455590 | orchestrator | Wednesday 18 February 2026 04:00:58 +0000 (0:00:02.589) 0:01:14.842 **** 2026-02-18 04:01:16.739655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:16.739769 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:16.739787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:16.739799 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:16.739811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:16.739822 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:16.739834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:01:16.739904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:01:16.739919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:01:16.739931 | orchestrator | 2026-02-18 04:01:16.739943 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-18 04:01:16.739955 | orchestrator | Wednesday 18 February 2026 04:01:01 +0000 (0:00:03.293) 0:01:18.136 **** 2026-02-18 04:01:16.739967 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:16.739977 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:16.739988 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:16.739999 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:16.740028 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:16.740049 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:16.740060 | orchestrator | 2026-02-18 04:01:16.740071 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-02-18 04:01:16.740082 | orchestrator | Wednesday 18 February 2026 04:01:03 +0000 (0:00:02.053) 0:01:20.190 **** 2026-02-18 04:01:16.740092 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:16.740103 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:16.740114 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:16.740125 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:16.740135 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:16.740146 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:16.740157 | orchestrator | 2026-02-18 04:01:16.740194 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-18 04:01:16.740207 | orchestrator | Wednesday 18 February 2026 04:01:05 +0000 (0:00:02.135) 0:01:22.325 **** 2026-02-18 04:01:16.740220 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:16.740233 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:16.740245 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:16.740259 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:16.740271 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:16.740283 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:16.740296 | orchestrator | 2026-02-18 04:01:16.740308 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-18 04:01:16.740330 | orchestrator | Wednesday 18 February 2026 04:01:08 +0000 (0:00:02.180) 0:01:24.505 **** 2026-02-18 04:01:16.740342 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:16.740356 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:16.740369 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:16.740381 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:16.740393 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:16.740406 | orchestrator | 
skipping: [testbed-node-5] 2026-02-18 04:01:16.740419 | orchestrator | 2026-02-18 04:01:16.740431 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-18 04:01:16.740444 | orchestrator | Wednesday 18 February 2026 04:01:10 +0000 (0:00:02.187) 0:01:26.693 **** 2026-02-18 04:01:16.740456 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:16.740469 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:16.740481 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:16.740493 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:16.740505 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:16.740517 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:16.740529 | orchestrator | 2026-02-18 04:01:16.740543 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-18 04:01:16.740554 | orchestrator | Wednesday 18 February 2026 04:01:12 +0000 (0:00:02.288) 0:01:28.982 **** 2026-02-18 04:01:16.740565 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:16.740575 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:16.740586 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:16.740597 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:16.740613 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:16.740624 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:16.740635 | orchestrator | 2026-02-18 04:01:16.740646 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-18 04:01:16.740657 | orchestrator | Wednesday 18 February 2026 04:01:14 +0000 (0:00:02.008) 0:01:30.990 **** 2026-02-18 04:01:16.740668 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-18 04:01:16.740679 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:16.740690 | orchestrator | skipping: 
[testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-18 04:01:16.740701 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:16.740711 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-18 04:01:16.740722 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:16.740733 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-18 04:01:16.740744 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:16.740762 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-18 04:01:21.034488 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:21.034576 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-18 04:01:21.034587 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:21.034595 | orchestrator | 2026-02-18 04:01:21.034603 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-18 04:01:21.034611 | orchestrator | Wednesday 18 February 2026 04:01:16 +0000 (0:00:02.135) 0:01:33.125 **** 2026-02-18 04:01:21.034622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:01:21.034651 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:21.034659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:01:21.034667 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:21.034675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:01:21.034682 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:21.034702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:21.034712 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:21.034733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:21.034746 | orchestrator | 
skipping: [testbed-node-4] 2026-02-18 04:01:21.034754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:21.034762 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:21.034769 | orchestrator | 2026-02-18 04:01:21.034776 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-18 04:01:21.034784 | orchestrator | Wednesday 18 February 2026 04:01:18 +0000 (0:00:02.097) 0:01:35.223 **** 2026-02-18 04:01:21.034791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:01:21.034799 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:21.034810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:01:21.034818 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:21.034832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:01:46.721064 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:46.721276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:46.721308 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:46.721328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:46.721346 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:46.721364 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:46.721381 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:46.721396 | orchestrator | 2026-02-18 04:01:46.721414 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-18 04:01:46.721432 | orchestrator | Wednesday 18 February 2026 04:01:21 +0000 (0:00:02.203) 0:01:37.426 **** 2026-02-18 04:01:46.721450 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:46.721466 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:46.721482 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:46.721498 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:46.721515 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:46.721533 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:46.721550 | orchestrator | 2026-02-18 04:01:46.721586 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-18 04:01:46.721604 | orchestrator | Wednesday 18 February 2026 04:01:23 +0000 (0:00:02.320) 0:01:39.746 **** 2026-02-18 04:01:46.721622 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:46.721639 | orchestrator | skipping: [testbed-node-1] 2026-02-18 
04:01:46.721656 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:46.721674 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:01:46.721691 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:01:46.721708 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:01:46.721724 | orchestrator | 2026-02-18 04:01:46.721742 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-18 04:01:46.721790 | orchestrator | Wednesday 18 February 2026 04:01:27 +0000 (0:00:03.794) 0:01:43.541 **** 2026-02-18 04:01:46.721808 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:46.721824 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:46.721841 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:46.721857 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:46.721874 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:46.721892 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:46.721908 | orchestrator | 2026-02-18 04:01:46.721926 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-18 04:01:46.721940 | orchestrator | Wednesday 18 February 2026 04:01:29 +0000 (0:00:02.176) 0:01:45.718 **** 2026-02-18 04:01:46.721956 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:46.721972 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:46.721988 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:46.722005 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:46.722097 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:46.722119 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:46.722137 | orchestrator | 2026-02-18 04:01:46.722177 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-02-18 04:01:46.722221 | orchestrator | Wednesday 18 February 2026 04:01:31 +0000 (0:00:02.066) 0:01:47.784 **** 2026-02-18 
04:01:46.722241 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:46.722259 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:46.722275 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:46.722291 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:46.722308 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:46.722326 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:46.722344 | orchestrator | 2026-02-18 04:01:46.722361 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-18 04:01:46.722378 | orchestrator | Wednesday 18 February 2026 04:01:33 +0000 (0:00:02.081) 0:01:49.866 **** 2026-02-18 04:01:46.722394 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:46.722411 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:46.722428 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:46.722446 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:46.722462 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:46.722477 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:46.722493 | orchestrator | 2026-02-18 04:01:46.722510 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-18 04:01:46.722528 | orchestrator | Wednesday 18 February 2026 04:01:35 +0000 (0:00:02.119) 0:01:51.985 **** 2026-02-18 04:01:46.722546 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:46.722564 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:46.722583 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:46.722602 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:46.722619 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:46.722636 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:46.722654 | orchestrator | 2026-02-18 04:01:46.722671 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-02-18 04:01:46.722687 | orchestrator | Wednesday 18 February 2026 04:01:37 +0000 (0:00:02.294) 0:01:54.280 **** 2026-02-18 04:01:46.722703 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:46.722718 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:46.722734 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:46.722751 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:46.722767 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:46.722785 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:46.722802 | orchestrator | 2026-02-18 04:01:46.722818 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-02-18 04:01:46.722835 | orchestrator | Wednesday 18 February 2026 04:01:39 +0000 (0:00:02.090) 0:01:56.370 **** 2026-02-18 04:01:46.722853 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:46.722886 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:46.722905 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:46.722922 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:46.722939 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:46.722956 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:46.722973 | orchestrator | 2026-02-18 04:01:46.722991 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-18 04:01:46.723009 | orchestrator | Wednesday 18 February 2026 04:01:42 +0000 (0:00:02.263) 0:01:58.634 **** 2026-02-18 04:01:46.723026 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-18 04:01:46.723046 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-18 04:01:46.723063 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:46.723081 | orchestrator | skipping: [testbed-node-0] 
2026-02-18 04:01:46.723099 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-18 04:01:46.723116 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:46.723133 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-18 04:01:46.723176 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:46.723194 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-18 04:01:46.723213 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:46.723231 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-18 04:01:46.723259 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:46.723278 | orchestrator | 2026-02-18 04:01:46.723296 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-18 04:01:46.723314 | orchestrator | Wednesday 18 February 2026 04:01:44 +0000 (0:00:02.030) 0:02:00.665 **** 2026-02-18 04:01:46.723336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:01:46.723355 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:01:46.723395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:01:49.242530 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:01:49.242661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-18 04:01:49.242681 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:01:49.242695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:49.242708 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:01:49.242734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:49.242745 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:01:49.242757 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 04:01:49.242768 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:01:49.242779 | orchestrator | 2026-02-18 04:01:49.242790 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-18 04:01:49.242803 | orchestrator | Wednesday 18 February 2026 04:01:46 +0000 (0:00:02.445) 0:02:03.111 **** 2026-02-18 04:01:49.242831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-02-18 04:01:49.242852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:01:49.242870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-18 04:01:49.242882 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:01:49.242894 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:01:49.242919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-18 04:04:12.325276 | orchestrator | 2026-02-18 04:04:12.325400 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-18 04:04:12.325418 | orchestrator | Wednesday 18 February 2026 04:01:49 +0000 (0:00:02.523) 0:02:05.634 **** 2026-02-18 04:04:12.325431 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:04:12.325443 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:04:12.325454 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:04:12.325465 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:04:12.325476 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:04:12.325487 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:04:12.325498 | orchestrator | 2026-02-18 04:04:12.325509 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-18 04:04:12.325520 | orchestrator | Wednesday 18 February 2026 04:01:49 +0000 (0:00:00.747) 0:02:06.381 **** 2026-02-18 04:04:12.325531 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:04:12.325542 | orchestrator | 2026-02-18 04:04:12.325553 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-18 04:04:12.325564 | orchestrator | Wednesday 18 February 2026 04:01:52 +0000 (0:00:02.264) 0:02:08.646 **** 2026-02-18 04:04:12.325575 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:04:12.325586 | orchestrator | 2026-02-18 04:04:12.325597 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-02-18 04:04:12.325607 | orchestrator | Wednesday 18 February 2026 04:01:54 +0000 (0:00:02.232) 
0:02:10.879 **** 2026-02-18 04:04:12.325618 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:04:12.325629 | orchestrator | 2026-02-18 04:04:12.325640 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-18 04:04:12.325651 | orchestrator | Wednesday 18 February 2026 04:02:37 +0000 (0:00:42.654) 0:02:53.533 **** 2026-02-18 04:04:12.325663 | orchestrator | 2026-02-18 04:04:12.325674 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-18 04:04:12.325685 | orchestrator | Wednesday 18 February 2026 04:02:37 +0000 (0:00:00.068) 0:02:53.602 **** 2026-02-18 04:04:12.325696 | orchestrator | 2026-02-18 04:04:12.325726 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-18 04:04:12.325737 | orchestrator | Wednesday 18 February 2026 04:02:37 +0000 (0:00:00.070) 0:02:53.672 **** 2026-02-18 04:04:12.325748 | orchestrator | 2026-02-18 04:04:12.325831 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-18 04:04:12.325847 | orchestrator | Wednesday 18 February 2026 04:02:37 +0000 (0:00:00.068) 0:02:53.741 **** 2026-02-18 04:04:12.325859 | orchestrator | 2026-02-18 04:04:12.325891 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-18 04:04:12.325904 | orchestrator | Wednesday 18 February 2026 04:02:37 +0000 (0:00:00.070) 0:02:53.812 **** 2026-02-18 04:04:12.325916 | orchestrator | 2026-02-18 04:04:12.325928 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-18 04:04:12.325942 | orchestrator | Wednesday 18 February 2026 04:02:37 +0000 (0:00:00.068) 0:02:53.880 **** 2026-02-18 04:04:12.325954 | orchestrator | 2026-02-18 04:04:12.325967 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-02-18 04:04:12.325979 
| orchestrator | Wednesday 18 February 2026 04:02:37 +0000 (0:00:00.070) 0:02:53.951 **** 2026-02-18 04:04:12.326112 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:04:12.326137 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:04:12.326155 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:04:12.326166 | orchestrator | 2026-02-18 04:04:12.326177 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-02-18 04:04:12.326188 | orchestrator | Wednesday 18 February 2026 04:03:06 +0000 (0:00:28.634) 0:03:22.585 **** 2026-02-18 04:04:12.326199 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:04:12.326210 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:04:12.326221 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:04:12.326232 | orchestrator | 2026-02-18 04:04:12.326242 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:04:12.326255 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-18 04:04:12.326267 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-18 04:04:12.326279 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-18 04:04:12.326290 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-18 04:04:12.326300 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-18 04:04:12.326311 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-18 04:04:12.326322 | orchestrator | 2026-02-18 04:04:12.326333 | orchestrator | 2026-02-18 04:04:12.326344 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 
04:04:12.326355 | orchestrator | Wednesday 18 February 2026 04:04:11 +0000 (0:01:05.692) 0:04:28.278 **** 2026-02-18 04:04:12.326365 | orchestrator | =============================================================================== 2026-02-18 04:04:12.326376 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 65.69s 2026-02-18 04:04:12.326387 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.65s 2026-02-18 04:04:12.326398 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.63s 2026-02-18 04:04:12.326429 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.23s 2026-02-18 04:04:12.326441 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.07s 2026-02-18 04:04:12.326452 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.58s 2026-02-18 04:04:12.326463 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.42s 2026-02-18 04:04:12.326474 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.99s 2026-02-18 04:04:12.326485 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.80s 2026-02-18 04:04:12.326495 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.41s 2026-02-18 04:04:12.326506 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.30s 2026-02-18 04:04:12.326517 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.29s 2026-02-18 04:04:12.326528 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.13s 2026-02-18 04:04:12.326538 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.06s 2026-02-18 04:04:12.326549 | 
orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.81s 2026-02-18 04:04:12.326560 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.59s 2026-02-18 04:04:12.326580 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.57s 2026-02-18 04:04:12.326591 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.52s 2026-02-18 04:04:12.326602 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.45s 2026-02-18 04:04:12.326613 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 2.43s 2026-02-18 04:04:15.713264 | orchestrator | 2026-02-18 04:04:15 | INFO  | Task fa0c2bd4-c4fd-4d25-8ab1-a21a5afbb9e9 (nova) was prepared for execution. 2026-02-18 04:04:15.713365 | orchestrator | 2026-02-18 04:04:15 | INFO  | It takes a moment until task fa0c2bd4-c4fd-4d25-8ab1-a21a5afbb9e9 (nova) has been started and output is visible here. 
2026-02-18 04:06:18.800482 | orchestrator | 2026-02-18 04:06:18.800637 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 04:06:18.800656 | orchestrator | 2026-02-18 04:06:18.800670 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-02-18 04:06:18.800683 | orchestrator | Wednesday 18 February 2026 04:04:19 +0000 (0:00:00.274) 0:00:00.274 **** 2026-02-18 04:06:18.800696 | orchestrator | changed: [testbed-manager] 2026-02-18 04:06:18.800710 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:06:18.800723 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:06:18.800735 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:06:18.800748 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:06:18.800761 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:06:18.800773 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:06:18.800786 | orchestrator | 2026-02-18 04:06:18.800799 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 04:06:18.800812 | orchestrator | Wednesday 18 February 2026 04:04:20 +0000 (0:00:00.849) 0:00:01.123 **** 2026-02-18 04:06:18.800825 | orchestrator | changed: [testbed-manager] 2026-02-18 04:06:18.800837 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:06:18.800850 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:06:18.800863 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:06:18.800876 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:06:18.800889 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:06:18.800902 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:06:18.800915 | orchestrator | 2026-02-18 04:06:18.800928 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 04:06:18.800941 | orchestrator | Wednesday 18 February 2026 04:04:21 +0000 (0:00:00.841) 
0:00:01.965 **** 2026-02-18 04:06:18.800954 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-02-18 04:06:18.800967 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-02-18 04:06:18.800980 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-02-18 04:06:18.801024 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-02-18 04:06:18.801040 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-02-18 04:06:18.801053 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-02-18 04:06:18.801067 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-02-18 04:06:18.801079 | orchestrator | 2026-02-18 04:06:18.801092 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-02-18 04:06:18.801105 | orchestrator | 2026-02-18 04:06:18.801118 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-18 04:06:18.801131 | orchestrator | Wednesday 18 February 2026 04:04:22 +0000 (0:00:00.725) 0:00:02.691 **** 2026-02-18 04:06:18.801145 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:06:18.801158 | orchestrator | 2026-02-18 04:06:18.801171 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-02-18 04:06:18.801184 | orchestrator | Wednesday 18 February 2026 04:04:23 +0000 (0:00:00.742) 0:00:03.433 **** 2026-02-18 04:06:18.801199 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-02-18 04:06:18.801234 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-02-18 04:06:18.801247 | orchestrator | 2026-02-18 04:06:18.801260 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-02-18 04:06:18.801272 | orchestrator | Wednesday 18 February 2026 04:04:27 +0000 (0:00:04.412) 
0:00:07.846 **** 2026-02-18 04:06:18.801285 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-18 04:06:18.801298 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-18 04:06:18.801310 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:06:18.801323 | orchestrator | 2026-02-18 04:06:18.801335 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-18 04:06:18.801348 | orchestrator | Wednesday 18 February 2026 04:04:31 +0000 (0:00:04.317) 0:00:12.164 **** 2026-02-18 04:06:18.801360 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:06:18.801373 | orchestrator | 2026-02-18 04:06:18.801386 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-02-18 04:06:18.801398 | orchestrator | Wednesday 18 February 2026 04:04:32 +0000 (0:00:00.694) 0:00:12.858 **** 2026-02-18 04:06:18.801411 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:06:18.801424 | orchestrator | 2026-02-18 04:06:18.801437 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-02-18 04:06:18.801449 | orchestrator | Wednesday 18 February 2026 04:04:33 +0000 (0:00:01.355) 0:00:14.213 **** 2026-02-18 04:06:18.801462 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:06:18.801474 | orchestrator | 2026-02-18 04:06:18.801487 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-18 04:06:18.801500 | orchestrator | Wednesday 18 February 2026 04:04:36 +0000 (0:00:02.595) 0:00:16.809 **** 2026-02-18 04:06:18.801512 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:06:18.801525 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:06:18.801537 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:06:18.801550 | orchestrator | 2026-02-18 04:06:18.801563 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 
2026-02-18 04:06:18.801575 | orchestrator | Wednesday 18 February 2026 04:04:36 +0000 (0:00:00.295) 0:00:17.105 **** 2026-02-18 04:06:18.801588 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:06:18.801601 | orchestrator | 2026-02-18 04:06:18.801613 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-02-18 04:06:18.801626 | orchestrator | Wednesday 18 February 2026 04:05:10 +0000 (0:00:33.908) 0:00:51.014 **** 2026-02-18 04:06:18.801639 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:06:18.801651 | orchestrator | 2026-02-18 04:06:18.801664 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-18 04:06:18.801676 | orchestrator | Wednesday 18 February 2026 04:05:26 +0000 (0:00:15.550) 0:01:06.564 **** 2026-02-18 04:06:18.801689 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:06:18.801702 | orchestrator | 2026-02-18 04:06:18.801714 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-18 04:06:18.801727 | orchestrator | Wednesday 18 February 2026 04:05:38 +0000 (0:00:12.017) 0:01:18.582 **** 2026-02-18 04:06:18.801761 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:06:18.801775 | orchestrator | 2026-02-18 04:06:18.801795 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-02-18 04:06:18.801808 | orchestrator | Wednesday 18 February 2026 04:05:38 +0000 (0:00:00.638) 0:01:19.220 **** 2026-02-18 04:06:18.801821 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:06:18.801833 | orchestrator | 2026-02-18 04:06:18.801846 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-18 04:06:18.801858 | orchestrator | Wednesday 18 February 2026 04:05:39 +0000 (0:00:00.462) 0:01:19.683 **** 2026-02-18 04:06:18.801871 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:06:18.801884 | orchestrator | 2026-02-18 04:06:18.801897 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-02-18 04:06:18.801919 | orchestrator | Wednesday 18 February 2026 04:05:40 +0000 (0:00:00.654) 0:01:20.338 **** 2026-02-18 04:06:18.801931 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:06:18.801944 | orchestrator | 2026-02-18 04:06:18.801956 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-18 04:06:18.801969 | orchestrator | Wednesday 18 February 2026 04:05:59 +0000 (0:00:19.277) 0:01:39.616 **** 2026-02-18 04:06:18.801981 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:06:18.802012 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:06:18.802096 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:06:18.802110 | orchestrator | 2026-02-18 04:06:18.802124 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-02-18 04:06:18.802173 | orchestrator | 2026-02-18 04:06:18.802187 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-18 04:06:18.802199 | orchestrator | Wednesday 18 February 2026 04:05:59 +0000 (0:00:00.307) 0:01:39.924 **** 2026-02-18 04:06:18.802210 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:06:18.802222 | orchestrator | 2026-02-18 04:06:18.802233 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-02-18 04:06:18.802245 | orchestrator | Wednesday 18 February 2026 04:06:00 +0000 (0:00:00.766) 0:01:40.690 **** 2026-02-18 04:06:18.802256 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:06:18.802268 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:06:18.802280 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:06:18.802291 | 
orchestrator | 2026-02-18 04:06:18.802302 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-02-18 04:06:18.802314 | orchestrator | Wednesday 18 February 2026 04:06:02 +0000 (0:00:02.186) 0:01:42.877 **** 2026-02-18 04:06:18.802325 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:06:18.802337 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:06:18.802348 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:06:18.802360 | orchestrator | 2026-02-18 04:06:18.802371 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-18 04:06:18.802383 | orchestrator | Wednesday 18 February 2026 04:06:04 +0000 (0:00:02.239) 0:01:45.116 **** 2026-02-18 04:06:18.802394 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:06:18.802406 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:06:18.802417 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:06:18.802429 | orchestrator | 2026-02-18 04:06:18.802440 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-18 04:06:18.802451 | orchestrator | Wednesday 18 February 2026 04:06:05 +0000 (0:00:00.488) 0:01:45.605 **** 2026-02-18 04:06:18.802463 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-18 04:06:18.802475 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:06:18.802486 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-18 04:06:18.802497 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:06:18.802508 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-18 04:06:18.802520 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-02-18 04:06:18.802531 | orchestrator | 2026-02-18 04:06:18.802543 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-18 04:06:18.802554 | orchestrator | Wednesday 18 February 2026 
04:06:13 +0000 (0:00:08.096) 0:01:53.701 **** 2026-02-18 04:06:18.802566 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:06:18.802577 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:06:18.802589 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:06:18.802600 | orchestrator | 2026-02-18 04:06:18.802612 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-18 04:06:18.802623 | orchestrator | Wednesday 18 February 2026 04:06:13 +0000 (0:00:00.326) 0:01:54.028 **** 2026-02-18 04:06:18.802634 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-18 04:06:18.802646 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:06:18.802657 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-18 04:06:18.802678 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:06:18.802690 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-18 04:06:18.802702 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:06:18.802713 | orchestrator | 2026-02-18 04:06:18.802724 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-18 04:06:18.802736 | orchestrator | Wednesday 18 February 2026 04:06:14 +0000 (0:00:01.072) 0:01:55.100 **** 2026-02-18 04:06:18.802747 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:06:18.802758 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:06:18.802770 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:06:18.802781 | orchestrator | 2026-02-18 04:06:18.802792 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-02-18 04:06:18.802804 | orchestrator | Wednesday 18 February 2026 04:06:15 +0000 (0:00:00.465) 0:01:55.565 **** 2026-02-18 04:06:18.802815 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:06:18.802827 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:06:18.802838 | orchestrator | changed: 
[testbed-node-0] 2026-02-18 04:06:18.802850 | orchestrator | 2026-02-18 04:06:18.802861 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-02-18 04:06:18.802872 | orchestrator | Wednesday 18 February 2026 04:06:16 +0000 (0:00:01.110) 0:01:56.676 **** 2026-02-18 04:06:18.802884 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:06:18.802895 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:06:18.802931 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:07:38.887705 | orchestrator | 2026-02-18 04:07:38.887810 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-02-18 04:07:38.887825 | orchestrator | Wednesday 18 February 2026 04:06:18 +0000 (0:00:02.397) 0:01:59.074 **** 2026-02-18 04:07:38.887835 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:07:38.887845 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:07:38.887854 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:07:38.887864 | orchestrator | 2026-02-18 04:07:38.887874 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-18 04:07:38.887882 | orchestrator | Wednesday 18 February 2026 04:06:40 +0000 (0:00:22.162) 0:02:21.236 **** 2026-02-18 04:07:38.887891 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:07:38.887900 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:07:38.887908 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:07:38.887917 | orchestrator | 2026-02-18 04:07:38.887925 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-18 04:07:38.887934 | orchestrator | Wednesday 18 February 2026 04:06:54 +0000 (0:00:13.074) 0:02:34.310 **** 2026-02-18 04:07:38.887943 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:07:38.887951 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:07:38.887960 | orchestrator | skipping: [testbed-node-2] 
2026-02-18 04:07:38.887968 | orchestrator | 2026-02-18 04:07:38.887977 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-18 04:07:38.887986 | orchestrator | Wednesday 18 February 2026 04:06:55 +0000 (0:00:01.057) 0:02:35.368 **** 2026-02-18 04:07:38.887994 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:07:38.888050 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:07:38.888061 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:07:38.888069 | orchestrator | 2026-02-18 04:07:38.888078 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-18 04:07:38.888087 | orchestrator | Wednesday 18 February 2026 04:07:06 +0000 (0:00:11.830) 0:02:47.199 **** 2026-02-18 04:07:38.888096 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:07:38.888104 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:07:38.888112 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:07:38.888121 | orchestrator | 2026-02-18 04:07:38.888130 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-18 04:07:38.888138 | orchestrator | Wednesday 18 February 2026 04:07:07 +0000 (0:00:01.049) 0:02:48.248 **** 2026-02-18 04:07:38.888167 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:07:38.888177 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:07:38.888185 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:07:38.888194 | orchestrator | 2026-02-18 04:07:38.888202 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-18 04:07:38.888211 | orchestrator | 2026-02-18 04:07:38.888219 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-18 04:07:38.888228 | orchestrator | Wednesday 18 February 2026 04:07:08 +0000 (0:00:00.327) 0:02:48.576 **** 2026-02-18 04:07:38.888277 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:07:38.888289 | orchestrator | 2026-02-18 04:07:38.888300 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-18 04:07:38.888310 | orchestrator | Wednesday 18 February 2026 04:07:09 +0000 (0:00:00.746) 0:02:49.322 **** 2026-02-18 04:07:38.888320 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-18 04:07:38.888330 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-18 04:07:38.888341 | orchestrator | 2026-02-18 04:07:38.888352 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-18 04:07:38.888363 | orchestrator | Wednesday 18 February 2026 04:07:12 +0000 (0:00:03.436) 0:02:52.758 **** 2026-02-18 04:07:38.888373 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-18 04:07:38.888384 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-18 04:07:38.888393 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-18 04:07:38.888401 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-18 04:07:38.888411 | orchestrator | 2026-02-18 04:07:38.888419 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-18 04:07:38.888428 | orchestrator | Wednesday 18 February 2026 04:07:19 +0000 (0:00:06.653) 0:02:59.412 **** 2026-02-18 04:07:38.888437 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-18 04:07:38.888445 | orchestrator | 2026-02-18 04:07:38.888454 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-02-18 04:07:38.888462 | orchestrator | Wednesday 18 February 2026 04:07:22 +0000 (0:00:03.159) 0:03:02.572 **** 2026-02-18 04:07:38.888471 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-18 04:07:38.888479 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-18 04:07:38.888488 | orchestrator | 2026-02-18 04:07:38.888497 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-18 04:07:38.888505 | orchestrator | Wednesday 18 February 2026 04:07:26 +0000 (0:00:04.002) 0:03:06.574 **** 2026-02-18 04:07:38.888514 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-18 04:07:38.888522 | orchestrator | 2026-02-18 04:07:38.888531 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-18 04:07:38.888539 | orchestrator | Wednesday 18 February 2026 04:07:29 +0000 (0:00:03.352) 0:03:09.927 **** 2026-02-18 04:07:38.888548 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-18 04:07:38.888556 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-18 04:07:38.888565 | orchestrator | 2026-02-18 04:07:38.888574 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-18 04:07:38.888602 | orchestrator | Wednesday 18 February 2026 04:07:37 +0000 (0:00:07.935) 0:03:17.862 **** 2026-02-18 04:07:38.888617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:38.888640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:38.888651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:38.888673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-18 04:07:43.385404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:43.385530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:43.385555 | orchestrator | 2026-02-18 04:07:43.385566 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-18 04:07:43.385577 | orchestrator | Wednesday 18 February 2026 04:07:38 +0000 (0:00:01.304) 0:03:19.167 **** 2026-02-18 04:07:43.385587 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:07:43.385596 | orchestrator | 2026-02-18 04:07:43.385605 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-18 04:07:43.385614 | orchestrator | Wednesday 18 February 2026 04:07:39 +0000 (0:00:00.131) 0:03:19.298 **** 2026-02-18 04:07:43.385622 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:07:43.385631 | 
orchestrator | skipping: [testbed-node-1] 2026-02-18 04:07:43.385640 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:07:43.385648 | orchestrator | 2026-02-18 04:07:43.385657 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-18 04:07:43.385666 | orchestrator | Wednesday 18 February 2026 04:07:39 +0000 (0:00:00.311) 0:03:19.610 **** 2026-02-18 04:07:43.385674 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 04:07:43.385683 | orchestrator | 2026-02-18 04:07:43.385692 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-18 04:07:43.385700 | orchestrator | Wednesday 18 February 2026 04:07:39 +0000 (0:00:00.670) 0:03:20.281 **** 2026-02-18 04:07:43.385709 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:07:43.385718 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:07:43.385727 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:07:43.385735 | orchestrator | 2026-02-18 04:07:43.385744 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-18 04:07:43.385753 | orchestrator | Wednesday 18 February 2026 04:07:40 +0000 (0:00:00.503) 0:03:20.784 **** 2026-02-18 04:07:43.385762 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:07:43.385771 | orchestrator | 2026-02-18 04:07:43.385780 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-18 04:07:43.385789 | orchestrator | Wednesday 18 February 2026 04:07:41 +0000 (0:00:00.635) 0:03:21.419 **** 2026-02-18 04:07:43.385817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:43.385863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:43.385876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:43.385886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:43.385895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:43.385914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:43.385924 | orchestrator | 2026-02-18 04:07:43.385938 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-18 04:07:44.996705 | orchestrator | Wednesday 18 February 2026 04:07:43 +0000 (0:00:02.248) 0:03:23.668 **** 2026-02-18 04:07:44.996827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 04:07:44.996851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:07:44.996865 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:07:44.996879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 04:07:44.996917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:07:44.996943 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:07:44.996976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 04:07:44.996990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:07:44.997001 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:07:44.997012 | orchestrator | 2026-02-18 04:07:44.997024 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-18 04:07:44.997092 | orchestrator | Wednesday 18 February 2026 04:07:44 +0000 (0:00:00.823) 
0:03:24.491 **** 2026-02-18 04:07:44.997106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 04:07:44.997128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:07:44.997139 | orchestrator | skipping: 
[testbed-node-0] 2026-02-18 04:07:44.997167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 04:07:47.254675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:07:47.254767 | orchestrator | skipping: 
[testbed-node-1] 2026-02-18 04:07:47.254783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 04:07:47.254814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:07:47.254824 | orchestrator | skipping: 
[testbed-node-2] 2026-02-18 04:07:47.254834 | orchestrator | 2026-02-18 04:07:47.254844 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-18 04:07:47.254854 | orchestrator | Wednesday 18 February 2026 04:07:44 +0000 (0:00:00.789) 0:03:25.281 **** 2026-02-18 04:07:47.254876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:47.254903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:47.254914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:47.254930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:47.254944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:47.254960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:53.652378 | orchestrator | 2026-02-18 04:07:53.652489 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-18 04:07:53.652505 | orchestrator | Wednesday 18 February 2026 04:07:47 +0000 (0:00:02.260) 0:03:27.542 **** 2026-02-18 04:07:53.652521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:53.652557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:53.652611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:53.652641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:53.652654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:53.652672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:53.652682 | orchestrator | 2026-02-18 04:07:53.652692 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-18 04:07:53.652702 | orchestrator | Wednesday 18 February 2026 04:07:52 +0000 (0:00:05.705) 0:03:33.247 **** 2026-02-18 04:07:53.652717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 04:07:53.652728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:07:53.652739 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:07:53.652760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 04:07:58.021210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:07:58.021330 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:07:58.021349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-18 04:07:58.021389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:07:58.021410 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:07:58.021429 | orchestrator | 2026-02-18 04:07:58.021448 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-18 04:07:58.021469 | orchestrator | Wednesday 18 February 2026 04:07:53 +0000 (0:00:00.691) 0:03:33.938 **** 2026-02-18 04:07:58.021490 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:07:58.021508 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:07:58.021524 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:07:58.021535 | orchestrator | 2026-02-18 04:07:58.021546 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-18 04:07:58.021557 | orchestrator | Wednesday 18 February 2026 04:07:55 +0000 (0:00:01.505) 0:03:35.444 **** 2026-02-18 04:07:58.021568 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:07:58.021579 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:07:58.021589 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:07:58.021600 | orchestrator | 2026-02-18 04:07:58.021611 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-18 04:07:58.021622 | orchestrator | Wednesday 18 February 2026 04:07:55 +0000 (0:00:00.356) 0:03:35.801 **** 2026-02-18 04:07:58.021663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:58.021715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:58.021743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-18 04:07:58.021764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:58.021794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:07:58.021827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:37.874215 | orchestrator | 2026-02-18 04:08:37.874418 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-18 04:08:37.874437 | orchestrator | Wednesday 18 February 2026 04:07:57 +0000 (0:00:02.079) 0:03:37.881 **** 2026-02-18 04:08:37.874447 | orchestrator | 2026-02-18 04:08:37.874457 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-18 04:08:37.874467 | orchestrator | Wednesday 18 February 2026 04:07:57 
+0000 (0:00:00.141) 0:03:38.023 ****
2026-02-18 04:08:37.874477 | orchestrator |
2026-02-18 04:08:37.874487 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-18 04:08:37.874496 | orchestrator | Wednesday 18 February 2026 04:07:57 +0000 (0:00:00.138) 0:03:38.161 ****
2026-02-18 04:08:37.874506 | orchestrator |
2026-02-18 04:08:37.874515 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-02-18 04:08:37.874525 | orchestrator | Wednesday 18 February 2026 04:07:58 +0000 (0:00:00.140) 0:03:38.301 ****
2026-02-18 04:08:37.874535 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:08:37.874546 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:08:37.874555 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:08:37.874565 | orchestrator |
2026-02-18 04:08:37.874574 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-02-18 04:08:37.874584 | orchestrator | Wednesday 18 February 2026 04:08:16 +0000 (0:00:18.510) 0:03:56.811 ****
2026-02-18 04:08:37.874593 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:08:37.874603 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:08:37.874612 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:08:37.874622 | orchestrator |
2026-02-18 04:08:37.874631 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-02-18 04:08:37.874641 | orchestrator |
2026-02-18 04:08:37.874650 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-18 04:08:37.874660 | orchestrator | Wednesday 18 February 2026 04:08:26 +0000 (0:00:10.109) 0:04:06.920 ****
2026-02-18 04:08:37.874670 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 04:08:37.874681 | orchestrator |
2026-02-18 04:08:37.874690 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-18 04:08:37.874715 | orchestrator | Wednesday 18 February 2026 04:08:27 +0000 (0:00:01.198) 0:04:08.119 ****
2026-02-18 04:08:37.874725 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:08:37.874735 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:08:37.874744 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:08:37.874777 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:08:37.874787 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:08:37.874797 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:08:37.874806 | orchestrator |
2026-02-18 04:08:37.874816 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-02-18 04:08:37.874826 | orchestrator | Wednesday 18 February 2026 04:08:28 +0000 (0:00:00.742) 0:04:08.861 ****
2026-02-18 04:08:37.874836 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:08:37.874845 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:08:37.874854 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:08:37.874864 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 04:08:37.874874 | orchestrator |
2026-02-18 04:08:37.874884 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-18 04:08:37.874894 | orchestrator | Wednesday 18 February 2026 04:08:29 +0000 (0:00:00.826) 0:04:09.688 ****
2026-02-18 04:08:37.874904 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-02-18 04:08:37.874914 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-02-18 04:08:37.874923 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-02-18 04:08:37.874932 | orchestrator |
2026-02-18 04:08:37.874942 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-18 04:08:37.874951 | orchestrator | Wednesday 18 February 2026 04:08:30 +0000 (0:00:00.847) 0:04:10.536 ****
2026-02-18 04:08:37.874961 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-02-18 04:08:37.874970 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-02-18 04:08:37.874980 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-02-18 04:08:37.874989 | orchestrator |
2026-02-18 04:08:37.874999 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-18 04:08:37.875008 | orchestrator | Wednesday 18 February 2026 04:08:31 +0000 (0:00:01.235) 0:04:11.771 ****
2026-02-18 04:08:37.875018 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-02-18 04:08:37.875027 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:08:37.875036 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-02-18 04:08:37.875046 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:08:37.875055 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-02-18 04:08:37.875065 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:08:37.875074 | orchestrator |
2026-02-18 04:08:37.875084 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-02-18 04:08:37.875094 | orchestrator | Wednesday 18 February 2026 04:08:32 +0000 (0:00:00.589) 0:04:12.361 ****
2026-02-18 04:08:37.875103 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-18 04:08:37.875113 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-18 04:08:37.875122 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-18 04:08:37.875132 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-18 04:08:37.875141 | orchestrator |
skipping: [testbed-node-0] 2026-02-18 04:08:37.875151 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-18 04:08:37.875161 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-18 04:08:37.875170 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:08:37.875197 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-18 04:08:37.875208 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-18 04:08:37.875217 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-18 04:08:37.875227 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-18 04:08:37.875236 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:08:37.875253 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-18 04:08:37.875262 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-18 04:08:37.875272 | orchestrator | 2026-02-18 04:08:37.875282 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-18 04:08:37.875291 | orchestrator | Wednesday 18 February 2026 04:08:33 +0000 (0:00:01.174) 0:04:13.536 **** 2026-02-18 04:08:37.875325 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:08:37.875336 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:08:37.875345 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:08:37.875355 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:08:37.875364 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:08:37.875374 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:08:37.875383 | orchestrator | 2026-02-18 04:08:37.875393 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-18 
04:08:37.875403 | orchestrator | Wednesday 18 February 2026 04:08:34 +0000 (0:00:01.067) 0:04:14.603 **** 2026-02-18 04:08:37.875412 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:08:37.875422 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:08:37.875431 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:08:37.875440 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:08:37.875450 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:08:37.875460 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:08:37.875469 | orchestrator | 2026-02-18 04:08:37.875479 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-18 04:08:37.875488 | orchestrator | Wednesday 18 February 2026 04:08:36 +0000 (0:00:01.697) 0:04:16.300 **** 2026-02-18 04:08:37.875506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-18 04:08:37.875523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-18 04:08:37.875542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575963 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:39.575997 | orchestrator | 2026-02-18 04:08:39.576008 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-18 
04:08:39.576018 | orchestrator | Wednesday 18 February 2026 04:08:38 +0000 (0:00:02.310) 0:04:18.611 **** 2026-02-18 04:08:39.576027 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:08:39.576037 | orchestrator | 2026-02-18 04:08:39.576046 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-18 04:08:39.576060 | orchestrator | Wednesday 18 February 2026 04:08:39 +0000 (0:00:01.246) 0:04:19.857 **** 2026-02-18 04:08:42.974535 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-18 04:08:42.974654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-18 04:08:42.974671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-18 04:08:42.974684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:08:42.974721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:08:42.974751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:08:42.974763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-18 
04:08:42.974781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-18 04:08:42.974793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-18 04:08:42.974804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:42.974823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:42.974834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:42.974855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:44.496122 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:44.496264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:08:44.496285 | orchestrator | 2026-02-18 04:08:44.496299 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-18 04:08:44.496312 | orchestrator | Wednesday 18 February 2026 04:08:43 +0000 (0:00:03.576) 0:04:23.434 **** 2026-02-18 04:08:44.496325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-18 04:08:44.496426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-18 04:08:44.496449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-18 04:08:44.496469 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:08:44.496526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-18 04:08:44.496549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-18 04:08:44.496566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-18 04:08:44.496587 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:08:44.496599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-18 04:08:44.496610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-18 04:08:44.496632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-18 04:08:46.978324 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:08:46.978512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-18 04:08:46.978534 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-18 04:08:46.978568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:08:46.978580 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:08:46.978592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:08:46.978603 | orchestrator | skipping: [testbed-node-0] 2026-02-18 
04:08:46.978615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-18 04:08:46.978626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:08:46.978637 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:08:46.978649 | orchestrator | 2026-02-18 04:08:46.978661 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-18 04:08:46.978673 | orchestrator | Wednesday 18 February 2026 04:08:44 +0000 (0:00:01.604) 0:04:25.039 **** 2026-02-18 04:08:46.978709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-18 04:08:46.978731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-18 04:08:46.978745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-02-18 04:08:46.978757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-18 04:08:46.978769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-18 04:08:46.978789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-18 04:08:53.661420 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:08:53.661492 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:08:53.661515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-18 04:08:53.661522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-18 04:08:53.661528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-18 04:08:53.661533 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:08:53.661538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-18 04:08:53.661543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:08:53.661561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-18 04:08:53.661569 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:08:53.661573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:08:53.661577 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:08:53.661580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-18 04:08:53.661584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:08:53.661588 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:08:53.661592 | orchestrator | 2026-02-18 04:08:53.661596 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-18 04:08:53.661602 | orchestrator | Wednesday 18 February 2026 04:08:46 +0000 (0:00:02.225) 0:04:27.264 **** 2026-02-18 04:08:53.661605 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:08:53.661609 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:08:53.661613 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:08:53.661617 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 04:08:53.661621 | orchestrator | 2026-02-18 04:08:53.661625 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-18 
04:08:53.661628 | orchestrator | Wednesday 18 February 2026 04:08:47 +0000 (0:00:00.904) 0:04:28.169 **** 2026-02-18 04:08:53.661632 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-18 04:08:53.661636 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-18 04:08:53.661640 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-18 04:08:53.661644 | orchestrator | 2026-02-18 04:08:53.661648 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-18 04:08:53.661651 | orchestrator | Wednesday 18 February 2026 04:08:48 +0000 (0:00:01.064) 0:04:29.233 **** 2026-02-18 04:08:53.661655 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-18 04:08:53.661659 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-18 04:08:53.661662 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-18 04:08:53.661666 | orchestrator | 2026-02-18 04:08:53.661670 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-18 04:08:53.661673 | orchestrator | Wednesday 18 February 2026 04:08:49 +0000 (0:00:00.914) 0:04:30.148 **** 2026-02-18 04:08:53.661680 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:08:53.661685 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:08:53.661688 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:08:53.661692 | orchestrator | 2026-02-18 04:08:53.661696 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-18 04:08:53.661699 | orchestrator | Wednesday 18 February 2026 04:08:50 +0000 (0:00:00.562) 0:04:30.710 **** 2026-02-18 04:08:53.661703 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:08:53.661707 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:08:53.661710 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:08:53.661714 | orchestrator | 2026-02-18 04:08:53.661718 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-02-18 04:08:53.661722 | orchestrator | Wednesday 18 February 2026 04:08:50 +0000 (0:00:00.528) 0:04:31.238 **** 2026-02-18 04:08:53.661725 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-18 04:08:53.661729 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-18 04:08:53.661733 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-18 04:08:53.661736 | orchestrator | 2026-02-18 04:08:53.661740 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-18 04:08:53.661744 | orchestrator | Wednesday 18 February 2026 04:08:52 +0000 (0:00:01.428) 0:04:32.667 **** 2026-02-18 04:08:53.661752 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-18 04:09:11.706763 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-18 04:09:11.706889 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-18 04:09:11.706907 | orchestrator | 2026-02-18 04:09:11.706919 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-18 04:09:11.706933 | orchestrator | Wednesday 18 February 2026 04:08:53 +0000 (0:00:01.277) 0:04:33.945 **** 2026-02-18 04:09:11.706952 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-18 04:09:11.706970 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-18 04:09:11.706989 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-18 04:09:11.707006 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-18 04:09:11.707025 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-18 04:09:11.707044 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-18 04:09:11.707063 | orchestrator | 2026-02-18 04:09:11.707082 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-18 
04:09:11.707100 | orchestrator | Wednesday 18 February 2026 04:08:57 +0000 (0:00:03.733) 0:04:37.679 **** 2026-02-18 04:09:11.707120 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:09:11.707141 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:09:11.707154 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:09:11.707166 | orchestrator | 2026-02-18 04:09:11.707176 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-18 04:09:11.707188 | orchestrator | Wednesday 18 February 2026 04:08:57 +0000 (0:00:00.312) 0:04:37.992 **** 2026-02-18 04:09:11.707199 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:09:11.707209 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:09:11.707220 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:09:11.707231 | orchestrator | 2026-02-18 04:09:11.707243 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-18 04:09:11.707254 | orchestrator | Wednesday 18 February 2026 04:08:58 +0000 (0:00:00.514) 0:04:38.506 **** 2026-02-18 04:09:11.707265 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:09:11.707276 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:09:11.707287 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:09:11.707298 | orchestrator | 2026-02-18 04:09:11.707309 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-18 04:09:11.707320 | orchestrator | Wednesday 18 February 2026 04:08:59 +0000 (0:00:01.217) 0:04:39.724 **** 2026-02-18 04:09:11.707331 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-18 04:09:11.707380 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-18 04:09:11.707398 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-18 04:09:11.707416 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-18 04:09:11.707434 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-18 04:09:11.707478 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-18 04:09:11.707498 | orchestrator | 2026-02-18 04:09:11.707510 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-18 04:09:11.707521 | orchestrator | Wednesday 18 February 2026 04:09:02 +0000 (0:00:03.291) 0:04:43.015 **** 2026-02-18 04:09:11.707532 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-18 04:09:11.707543 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-18 04:09:11.707553 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-18 04:09:11.707564 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-18 04:09:11.707575 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:09:11.707585 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-18 04:09:11.707596 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:09:11.707607 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-18 04:09:11.707617 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:09:11.707628 | orchestrator | 2026-02-18 04:09:11.707639 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-18 04:09:11.707649 | orchestrator | Wednesday 18 February 2026 04:09:06 +0000 (0:00:03.332) 0:04:46.348 **** 2026-02-18 04:09:11.707660 | 
orchestrator | skipping: [testbed-node-3] 2026-02-18 04:09:11.707671 | orchestrator | 2026-02-18 04:09:11.707682 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-18 04:09:11.707693 | orchestrator | Wednesday 18 February 2026 04:09:06 +0000 (0:00:00.139) 0:04:46.488 **** 2026-02-18 04:09:11.707704 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:09:11.707715 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:09:11.707725 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:09:11.707736 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:09:11.707747 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:09:11.707757 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:09:11.707768 | orchestrator | 2026-02-18 04:09:11.707779 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-18 04:09:11.707790 | orchestrator | Wednesday 18 February 2026 04:09:07 +0000 (0:00:00.827) 0:04:47.315 **** 2026-02-18 04:09:11.707800 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-18 04:09:11.707811 | orchestrator | 2026-02-18 04:09:11.707822 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-18 04:09:11.707833 | orchestrator | Wednesday 18 February 2026 04:09:07 +0000 (0:00:00.697) 0:04:48.013 **** 2026-02-18 04:09:11.707858 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:09:11.707889 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:09:11.707901 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:09:11.707912 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:09:11.707923 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:09:11.707933 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:09:11.707944 | orchestrator | 2026-02-18 04:09:11.707955 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-02-18 04:09:11.707966 | orchestrator | Wednesday 18 February 2026 04:09:08 +0000 (0:00:00.781) 0:04:48.795 **** 2026-02-18 04:09:11.707990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-18 04:09:11.708007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-18 04:09:11.708019 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-18 04:09:11.708032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:09:11.708059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:09:16.314800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:09:16.315807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-18 04:09:16.315864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-18 04:09:16.315883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-18 04:09:16.315901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:16.315921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:16.315991 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:16.316050 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:16.316073 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:16.316092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:16.316111 | orchestrator | 2026-02-18 04:09:16.316132 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-18 04:09:16.316154 | orchestrator | Wednesday 18 February 2026 04:09:11 +0000 (0:00:03.452) 0:04:52.248 **** 2026-02-18 04:09:16.316173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-18 04:09:16.316201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-18 04:09:16.316246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-18 04:09:18.425725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-18 04:09:18.425832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-18 04:09:18.425849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-18 04:09:18.425861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:09:18.425906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:18.425936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:18.425948 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:18.425959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:09:18.425970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-18 04:09:18.425981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:18.426002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:18.426013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:09:18.426080 | orchestrator | 2026-02-18 04:09:18.426092 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-18 04:09:18.426111 | orchestrator | Wednesday 18 February 2026 04:09:18 +0000 (0:00:06.455) 0:04:58.703 **** 2026-02-18 04:09:40.266701 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:09:40.266800 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:09:40.266811 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:09:40.266829 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:09:40.266837 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:09:40.266845 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:09:40.266852 | orchestrator | 2026-02-18 04:09:40.266869 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-18 04:09:40.266878 | orchestrator | Wednesday 18 February 2026 04:09:19 +0000 (0:00:01.507) 0:05:00.210 **** 2026-02-18 04:09:40.266886 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-18 04:09:40.266893 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-18 04:09:40.266901 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-18 04:09:40.266908 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-18 04:09:40.266915 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-18 04:09:40.266923 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-18 04:09:40.266931 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:09:40.266938 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-18 04:09:40.266945 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-18 04:09:40.266952 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:09:40.266960 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-18 04:09:40.266967 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:09:40.266974 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-18 04:09:40.266982 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-18 04:09:40.267008 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-18 04:09:40.267015 | orchestrator | 2026-02-18 04:09:40.267023 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-18 04:09:40.267031 | orchestrator | Wednesday 18 February 2026 04:09:23 +0000 (0:00:03.952) 0:05:04.163 **** 2026-02-18 04:09:40.267038 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:09:40.267045 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:09:40.267053 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:09:40.267060 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:09:40.267067 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:09:40.267074 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:09:40.267081 | orchestrator | 2026-02-18 04:09:40.267088 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-18 04:09:40.267095 | orchestrator | Wednesday 18 February 2026 04:09:24 +0000 (0:00:00.677) 0:05:04.841 **** 2026-02-18 04:09:40.267103 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-18 04:09:40.267111 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-18 04:09:40.267118 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-18 04:09:40.267125 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-18 04:09:40.267132 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-18 04:09:40.267139 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-18 04:09:40.267158 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-18 04:09:40.267165 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-18 04:09:40.267172 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-18 04:09:40.267179 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-18 04:09:40.267187 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:09:40.267194 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-18 04:09:40.267201 | orchestrator | 
skipping: [testbed-node-0] 2026-02-18 04:09:40.267208 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-18 04:09:40.267215 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:09:40.267222 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-18 04:09:40.267229 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-18 04:09:40.267251 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-18 04:09:40.267258 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-18 04:09:40.267266 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-18 04:09:40.267273 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-18 04:09:40.267280 | orchestrator | 2026-02-18 04:09:40.267287 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-18 04:09:40.267294 | orchestrator | Wednesday 18 February 2026 04:09:29 +0000 (0:00:05.169) 0:05:10.010 **** 2026-02-18 04:09:40.267308 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-18 04:09:40.267315 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-18 04:09:40.267322 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-18 04:09:40.267329 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-18 04:09:40.267336 
| orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-18 04:09:40.267343 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-18 04:09:40.267350 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-18 04:09:40.267357 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-18 04:09:40.267364 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-18 04:09:40.267372 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-18 04:09:40.267378 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-18 04:09:40.267385 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-18 04:09:40.267392 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-18 04:09:40.267399 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-18 04:09:40.267406 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:09:40.267414 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-18 04:09:40.267421 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:09:40.267428 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-18 04:09:40.267435 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:09:40.267442 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-18 04:09:40.267449 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-18 04:09:40.267457 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-18 04:09:40.267464 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-18 04:09:40.267471 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-18 04:09:40.267478 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-18 04:09:40.267485 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-18 04:09:40.267492 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-18 04:09:40.267499 | orchestrator | 2026-02-18 04:09:40.267510 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-18 04:09:40.267517 | orchestrator | Wednesday 18 February 2026 04:09:36 +0000 (0:00:06.867) 0:05:16.878 **** 2026-02-18 04:09:40.267524 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:09:40.267531 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:09:40.267538 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:09:40.267545 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:09:40.267552 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:09:40.267559 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:09:40.267566 | orchestrator | 2026-02-18 04:09:40.267573 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-18 04:09:40.267580 | orchestrator | Wednesday 18 February 2026 04:09:37 +0000 (0:00:00.785) 0:05:17.664 **** 2026-02-18 04:09:40.267618 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:09:40.267631 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:09:40.267638 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:09:40.267646 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:09:40.267652 | orchestrator | 
skipping: [testbed-node-1] 2026-02-18 04:09:40.267659 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:09:40.267666 | orchestrator | 2026-02-18 04:09:40.267674 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-18 04:09:40.267681 | orchestrator | Wednesday 18 February 2026 04:09:38 +0000 (0:00:00.632) 0:05:18.296 **** 2026-02-18 04:09:40.267692 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:09:40.267705 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:09:40.267717 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:09:40.267733 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:09:40.267745 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:09:40.267757 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:09:40.267770 | orchestrator | 2026-02-18 04:09:40.267790 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-18 04:09:41.253100 | orchestrator | Wednesday 18 February 2026 04:09:40 +0000 (0:00:02.233) 0:05:20.530 **** 2026-02-18 04:09:41.253185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})
2026-02-18 04:09:41.253200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-18 04:09:41.253211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-18 04:09:41.253220 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:09:41.253245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-18 04:09:41.253273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-18 04:09:41.253296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-18 04:09:41.253305 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:09:41.253314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-18 04:09:41.253322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-18 04:09:41.253330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-18 04:09:41.253348 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:09:41.253358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-18 04:09:41.253373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-18 04:09:45.183066 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:09:45.183201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-18 04:09:45.183233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-18 04:09:45.183254 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:09:45.183275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-18 04:09:45.183296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-18 04:09:45.183349 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:09:45.183371 | orchestrator |
2026-02-18 04:09:45.183390 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-02-18 04:09:45.183410 | orchestrator | Wednesday 18 February 2026 04:09:41 +0000 (0:00:01.433) 0:05:21.963 ****
2026-02-18 04:09:45.183429 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-18 04:09:45.183447 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-18 04:09:45.183481 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:09:45.183499 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-18 04:09:45.183517 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-18 04:09:45.183535 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:09:45.183553 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-18 04:09:45.183572 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-18 04:09:45.183590 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:09:45.183645 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-18 04:09:45.183666 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-18 04:09:45.183685 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:09:45.183704 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-18 04:09:45.183722 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-18 04:09:45.183741 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:09:45.183752 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-18 04:09:45.183763 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-18 04:09:45.183773 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:09:45.183784 | orchestrator |
2026-02-18 04:09:45.183795 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-02-18 04:09:45.183806 | orchestrator | Wednesday 18 February 2026 04:09:42 +0000 (0:00:01.064) 0:05:23.028 ****
2026-02-18 04:09:45.183841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-18 04:09:45.183856 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-18 04:09:45.183880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-18 04:09:45.183900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-18 04:09:45.183912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-18 04:09:45.183933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-18 04:10:34.852727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-18 04:10:34.852850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-18 04:10:34.852992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-18 04:10:34.853010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-18 04:10:34.853037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-18 04:10:34.853051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-18 04:10:34.853084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-18 04:10:34.853097 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-18 04:10:34.853118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-18 04:10:34.853130 | orchestrator |
2026-02-18 04:10:34.853143 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-18 04:10:34.853156 | orchestrator | Wednesday 18 February 2026 04:09:45 +0000 (0:00:02.777) 0:05:25.806 ****
2026-02-18 04:10:34.853168 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:10:34.853180 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:10:34.853191 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:10:34.853201 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:10:34.853212 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:10:34.853223 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:10:34.853234 | orchestrator |
2026-02-18 04:10:34.853245 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-18 04:10:34.853259 | orchestrator | Wednesday 18 February 2026 04:09:46 +0000 (0:00:00.780) 0:05:26.586 ****
2026-02-18 04:10:34.853271 | orchestrator |
2026-02-18 04:10:34.853284 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-18 04:10:34.853297 | orchestrator | Wednesday 18 February 2026 04:09:46 +0000 (0:00:00.152) 0:05:26.739 ****
2026-02-18 04:10:34.853309 | orchestrator |
2026-02-18 04:10:34.853322 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-18 04:10:34.853340 | orchestrator | Wednesday 18 February 2026 04:09:46 +0000 (0:00:00.136) 0:05:26.875 ****
2026-02-18 04:10:34.853352 | orchestrator |
2026-02-18 04:10:34.853365 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-18 04:10:34.853377 | orchestrator | Wednesday 18 February 2026 04:09:46 +0000 (0:00:00.137) 0:05:27.012 ****
2026-02-18 04:10:34.853390 | orchestrator |
2026-02-18 04:10:34.853401 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-18 04:10:34.853415 | orchestrator | Wednesday 18 February 2026 04:09:46 +0000 (0:00:00.135) 0:05:27.148 ****
2026-02-18 04:10:34.853426 | orchestrator |
2026-02-18 04:10:34.853438 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-18 04:10:34.853450 | orchestrator | Wednesday 18 February 2026 04:09:47 +0000 (0:00:00.298) 0:05:27.446 ****
2026-02-18 04:10:34.853462 | orchestrator |
2026-02-18 04:10:34.853474 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-02-18 04:10:34.853487 | orchestrator | Wednesday 18 February 2026 04:09:47 +0000 (0:00:00.150) 0:05:27.596 ****
2026-02-18 04:10:34.853499 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:10:34.853510 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:10:34.853520 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:10:34.853531 | orchestrator |
2026-02-18 04:10:34.853542 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-02-18 04:10:34.853553 | orchestrator | Wednesday 18 February 2026 04:09:54 +0000 (0:00:06.847) 0:05:34.444 ****
2026-02-18 04:10:34.853563 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:10:34.853574 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:10:34.853585 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:10:34.853595 | orchestrator |
2026-02-18 04:10:34.853606 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-02-18 04:10:34.853624 | orchestrator | Wednesday 18 February 2026 04:10:12 +0000 (0:00:18.771) 0:05:53.216 ****
2026-02-18 04:10:34.853635 | orchestrator | changed: [testbed-node-3]
2026-02-18 04:10:34.853646 | orchestrator | changed: [testbed-node-4]
2026-02-18 04:10:34.853657 | orchestrator | changed: [testbed-node-5]
2026-02-18 04:10:34.853667 | orchestrator |
2026-02-18 04:10:34.853685 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-02-18 04:12:59.876478 | orchestrator | Wednesday 18 February 2026 04:10:34 +0000 (0:00:21.909) 0:06:15.126 ****
2026-02-18 04:12:59.876681 | orchestrator | changed: [testbed-node-5]
2026-02-18 04:12:59.876712 | orchestrator | changed: [testbed-node-4]
2026-02-18 04:12:59.876732 | orchestrator | changed: [testbed-node-3]
2026-02-18 04:12:59.876752 | orchestrator |
2026-02-18 04:12:59.876773 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-02-18 04:12:59.876792 | orchestrator | Wednesday 18 February 2026 04:11:19 +0000 (0:00:44.184) 0:06:59.310 ****
2026-02-18 04:12:59.876812 | orchestrator | changed: [testbed-node-3]
2026-02-18 04:12:59.876830 | orchestrator | changed: [testbed-node-4]
2026-02-18 04:12:59.876848 | orchestrator | changed: [testbed-node-5]
2026-02-18 04:12:59.876859 | orchestrator |
2026-02-18 04:12:59.876870 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-02-18 04:12:59.876881 | orchestrator | Wednesday 18 February 2026 04:11:19 +0000 (0:00:00.784) 0:07:00.094 ****
2026-02-18 04:12:59.876892 | orchestrator | changed: [testbed-node-3]
2026-02-18 04:12:59.876903 | orchestrator | changed: [testbed-node-4]
2026-02-18 04:12:59.876914 | orchestrator | changed: [testbed-node-5]
2026-02-18 04:12:59.876927 | orchestrator |
2026-02-18 04:12:59.876940 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-02-18 04:12:59.876953 | orchestrator | Wednesday 18 February 2026 04:11:20 +0000 (0:00:00.814) 0:07:00.908 ****
2026-02-18 04:12:59.876965 | orchestrator | changed: [testbed-node-3]
2026-02-18 04:12:59.876977 | orchestrator | changed: [testbed-node-5]
2026-02-18 04:12:59.876990 | orchestrator | changed: [testbed-node-4]
2026-02-18 04:12:59.877003 | orchestrator |
2026-02-18 04:12:59.877016 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-02-18 04:12:59.877029 | orchestrator | Wednesday 18 February 2026 04:11:51 +0000 (0:00:30.609) 0:07:31.518 ****
2026-02-18 04:12:59.877041 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:12:59.877054 | orchestrator |
2026-02-18 04:12:59.877068 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-02-18 04:12:59.877088 | orchestrator | Wednesday 18 February 2026 04:11:51 +0000 (0:00:00.140) 0:07:31.658 ****
2026-02-18 04:12:59.877107 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:12:59.877126 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:12:59.877145 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:12:59.877164 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:12:59.877184 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:12:59.877205 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-02-18 04:12:59.877225 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-18 04:12:59.877244 | orchestrator |
2026-02-18 04:12:59.877262 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-02-18 04:12:59.877281 | orchestrator | Wednesday 18 February 2026 04:12:13 +0000 (0:00:22.471) 0:07:54.130 ****
2026-02-18 04:12:59.877301 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:12:59.877314 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:12:59.877325 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:12:59.877335 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:12:59.877346 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:12:59.877356 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:12:59.877367 | orchestrator |
2026-02-18 04:12:59.877378 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-02-18 04:12:59.877415 | orchestrator | Wednesday 18 February 2026 04:12:21 +0000 (0:00:08.144) 0:08:02.274 ****
2026-02-18 04:12:59.877427 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:12:59.877437 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:12:59.877448 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:12:59.877459 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:12:59.877469 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:12:59.877481 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-02-18 04:12:59.877492 | orchestrator |
2026-02-18 04:12:59.877518 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-18 04:12:59.877529 | orchestrator | Wednesday 18 February 2026 04:12:25 +0000 (0:00:03.891) 0:08:06.166 ****
2026-02-18 04:12:59.877539 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-18 04:12:59.877550 | orchestrator |
2026-02-18 04:12:59.877560 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-18 04:12:59.877571 | orchestrator | Wednesday 18 February 2026 04:12:39 +0000 (0:00:13.632) 0:08:19.798 ****
2026-02-18 04:12:59.877581 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-18 04:12:59.877592 | orchestrator |
2026-02-18 04:12:59.877602 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-02-18 04:12:59.877656 | orchestrator | Wednesday 18 February 2026 04:12:41 +0000 (0:00:01.542) 0:08:21.341 ****
2026-02-18 04:12:59.877667 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:12:59.877677 | orchestrator |
2026-02-18 04:12:59.877688 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-02-18 04:12:59.877699 | orchestrator | Wednesday 18 February 2026 04:12:42 +0000 (0:00:01.734) 0:08:23.075 ****
2026-02-18 04:12:59.877709 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-18 04:12:59.877720 | orchestrator |
2026-02-18 04:12:59.877731 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-02-18 04:12:59.877741 | orchestrator | Wednesday 18 February 2026 04:12:54 +0000 (0:00:11.772) 0:08:34.848 ****
2026-02-18 04:12:59.877752 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:12:59.877764 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:12:59.877774 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:12:59.877785 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:12:59.877795 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:12:59.877806 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:12:59.877816 | orchestrator |
2026-02-18 04:12:59.877827 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-02-18 04:12:59.877838 | orchestrator |
2026-02-18 04:12:59.877858 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-18 04:12:59.877888 | orchestrator | Wednesday 18 February 2026 04:12:56 +0000 (0:00:01.752) 0:08:36.600 ****
2026-02-18 04:12:59.877900 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:12:59.877911 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:12:59.877921 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:12:59.877932 | orchestrator |
2026-02-18 04:12:59.877943 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-18 04:12:59.877953 | orchestrator |
2026-02-18 04:12:59.877964 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-18 04:12:59.877983 | orchestrator | Wednesday 18 February 2026 04:12:57 +0000 (0:00:00.936) 0:08:37.537 ****
2026-02-18 04:12:59.878003 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:12:59.878099 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:12:59.878122 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:12:59.878134 | orchestrator |
2026-02-18 04:12:59.878144 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-18 04:12:59.878155 | orchestrator |
2026-02-18 04:12:59.878166 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-18 04:12:59.878177 | orchestrator | Wednesday 18 February 2026 04:12:57 +0000 (0:00:00.713) 0:08:38.251 ****
2026-02-18 04:12:59.878199 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-18 04:12:59.878210 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-18 04:12:59.878220 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-18 04:12:59.878231 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-18 04:12:59.878242 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-18 04:12:59.878253 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-18 04:12:59.878263 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:12:59.878274 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-18 04:12:59.878285 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-18 04:12:59.878295 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-18 04:12:59.878306 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-18 04:12:59.878316 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-18 04:12:59.878327 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-18 04:12:59.878338 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:12:59.878348 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-18 04:12:59.878359 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-18 04:12:59.878370 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-18 04:12:59.878380 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-18 04:12:59.878391 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-18 04:12:59.878401 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-18 04:12:59.878412 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:12:59.878422 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-18 04:12:59.878433 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-18 04:12:59.878443 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-18 04:12:59.878454 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-18 04:12:59.878464 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-18 04:12:59.878475 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-18 04:12:59.878485 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:12:59.878496 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-18 04:12:59.878507 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-18 04:12:59.878524 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-18 04:12:59.878535 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-18 04:12:59.878546 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-18 04:12:59.878557 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-18 04:12:59.878567 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:12:59.878578 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-18 04:12:59.878588 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-18 04:12:59.878599 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-18 04:12:59.878632 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-18 04:12:59.878643 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-18 04:12:59.878654 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-18 04:12:59.878664 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:12:59.878675 | orchestrator |
2026-02-18 04:12:59.878686 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-18 04:12:59.878696 | orchestrator |
2026-02-18 04:12:59.878707 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-18 04:12:59.878725 | orchestrator | Wednesday 18 February 2026 04:12:59 +0000 (0:00:01.337) 0:08:39.588 ****
2026-02-18 04:12:59.878736 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-18 04:12:59.878747 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-18 04:12:59.878758 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:12:59.878768 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-18 04:12:59.878779 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-18 04:12:59.878789 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:12:59.878800 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-18 04:12:59.878811 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-18 04:12:59.878821 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:12:59.878832 | orchestrator |
2026-02-18 04:12:59.878854 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-18 04:13:01.524525 | orchestrator |
2026-02-18 04:13:01.524664 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-18 04:13:01.524680 |
orchestrator | Wednesday 18 February 2026 04:12:59 +0000 (0:00:00.561) 0:08:40.149 **** 2026-02-18 04:13:01.524690 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:13:01.524699 | orchestrator | 2026-02-18 04:13:01.524707 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-02-18 04:13:01.524716 | orchestrator | 2026-02-18 04:13:01.524724 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-02-18 04:13:01.524732 | orchestrator | Wednesday 18 February 2026 04:13:00 +0000 (0:00:00.853) 0:08:41.003 **** 2026-02-18 04:13:01.524740 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:13:01.524748 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:13:01.524756 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:13:01.524763 | orchestrator | 2026-02-18 04:13:01.524771 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:13:01.524779 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 04:13:01.524790 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-02-18 04:13:01.524798 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-02-18 04:13:01.524806 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-02-18 04:13:01.524814 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-18 04:13:01.524822 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-02-18 04:13:01.524829 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-18 04:13:01.524837 | orchestrator | 2026-02-18 
04:13:01.524845 | orchestrator | 2026-02-18 04:13:01.524853 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:13:01.524860 | orchestrator | Wednesday 18 February 2026 04:13:01 +0000 (0:00:00.443) 0:08:41.447 **** 2026-02-18 04:13:01.524868 | orchestrator | =============================================================================== 2026-02-18 04:13:01.524876 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.18s 2026-02-18 04:13:01.524884 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.91s 2026-02-18 04:13:01.524891 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 30.61s 2026-02-18 04:13:01.524922 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.47s 2026-02-18 04:13:01.524930 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.16s 2026-02-18 04:13:01.524938 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.91s 2026-02-18 04:13:01.524946 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.28s 2026-02-18 04:13:01.524966 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 18.77s 2026-02-18 04:13:01.524974 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.51s 2026-02-18 04:13:01.524982 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.55s 2026-02-18 04:13:01.524989 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.63s 2026-02-18 04:13:01.524997 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.07s 2026-02-18 04:13:01.525005 | orchestrator | nova-cell : Get a list of existing cells 
------------------------------- 12.02s 2026-02-18 04:13:01.525013 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.83s 2026-02-18 04:13:01.525021 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.77s 2026-02-18 04:13:01.525028 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.11s 2026-02-18 04:13:01.525036 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.14s 2026-02-18 04:13:01.525044 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.10s 2026-02-18 04:13:01.525052 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.94s 2026-02-18 04:13:01.525060 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 6.87s 2026-02-18 04:13:03.787232 | orchestrator | 2026-02-18 04:13:03 | INFO  | Task 81cd6bd2-1496-4fa8-9b52-e7c7574537e1 (horizon) was prepared for execution. 2026-02-18 04:13:03.787325 | orchestrator | 2026-02-18 04:13:03 | INFO  | It takes a moment until task 81cd6bd2-1496-4fa8-9b52-e7c7574537e1 (horizon) has been started and output is visible here. 
2026-02-18 04:13:10.846872 | orchestrator |
2026-02-18 04:13:10.846986 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 04:13:10.847003 | orchestrator |
2026-02-18 04:13:10.847015 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 04:13:10.847026 | orchestrator | Wednesday 18 February 2026 04:13:07 +0000 (0:00:00.254) 0:00:00.254 ****
2026-02-18 04:13:10.847038 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:13:10.847050 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:13:10.847061 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:13:10.847071 | orchestrator |
2026-02-18 04:13:10.847083 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 04:13:10.847093 | orchestrator | Wednesday 18 February 2026 04:13:08 +0000 (0:00:00.308) 0:00:00.563 ****
2026-02-18 04:13:10.847105 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-18 04:13:10.847116 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-18 04:13:10.847127 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-18 04:13:10.847138 | orchestrator |
2026-02-18 04:13:10.847150 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-18 04:13:10.847161 | orchestrator |
2026-02-18 04:13:10.847172 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-18 04:13:10.847182 | orchestrator | Wednesday 18 February 2026 04:13:08 +0000 (0:00:00.419) 0:00:00.983 ****
2026-02-18 04:13:10.847194 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 04:13:10.847205 | orchestrator |
2026-02-18 04:13:10.847216 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-02-18 04:13:10.847227 | orchestrator | Wednesday 18 February 2026 04:13:09 +0000 (0:00:00.547) 0:00:01.531 **** 2026-02-18 04:13:10.847328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 04:13:10.847371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 04:13:10.847401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 04:13:10.847416 | orchestrator | 2026-02-18 04:13:10.847429 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-18 04:13:10.847442 | orchestrator | Wednesday 18 February 2026 04:13:10 +0000 (0:00:01.157) 0:00:02.688 **** 2026-02-18 04:13:10.847454 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:13:10.847467 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:13:10.847479 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:13:10.847491 | orchestrator | 2026-02-18 04:13:10.847503 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-18 04:13:10.847515 | orchestrator | Wednesday 18 February 2026 04:13:10 +0000 (0:00:00.451) 0:00:03.140 **** 2026-02-18 04:13:10.847535 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-18 04:13:16.757272 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-18 04:13:16.757380 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-18 04:13:16.757395 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  
2026-02-18 04:13:16.757407 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-18 04:13:16.757418 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-18 04:13:16.757429 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-18 04:13:16.757440 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-18 04:13:16.757477 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-18 04:13:16.757489 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-18 04:13:16.757499 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-18 04:13:16.757511 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-18 04:13:16.757529 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-18 04:13:16.757548 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-18 04:13:16.757567 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-18 04:13:16.757584 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-18 04:13:16.757603 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-18 04:13:16.757623 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-18 04:13:16.757642 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-18 04:13:16.757660 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-18 04:13:16.757713 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-18 04:13:16.757725 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-18 04:13:16.757736 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-18 04:13:16.757747 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-18 04:13:16.757759 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-18 04:13:16.757772 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-18 04:13:16.757783 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-18 04:13:16.757794 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-18 04:13:16.757821 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-18 04:13:16.757835 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-18 04:13:16.757847 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-18 04:13:16.757859 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-18 04:13:16.757872 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-18 04:13:16.757886 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-18 04:13:16.757899 | orchestrator |
2026-02-18 04:13:16.757912 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-18 04:13:16.757924 | orchestrator | Wednesday 18 February 2026 04:13:11 +0000 (0:00:00.789) 0:00:03.929 ****
2026-02-18 04:13:16.757936 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:13:16.757960 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:13:16.757971 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:13:16.757984 | orchestrator |
2026-02-18 04:13:16.757996 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-18 04:13:16.758009 | orchestrator | Wednesday 18 February 2026 04:13:11 +0000 (0:00:00.326) 0:00:04.255 ****
2026-02-18 04:13:16.758068 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.758082 | orchestrator |
2026-02-18 04:13:16.758114 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-18 04:13:16.758127 | orchestrator | Wednesday 18 February 2026 04:13:12 +0000 (0:00:00.295) 0:00:04.550 ****
2026-02-18 04:13:16.758140 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.758152 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:13:16.758164 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:13:16.758176 | orchestrator |
2026-02-18 04:13:16.758187 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-18 04:13:16.758198 | orchestrator | Wednesday 18 February 2026 04:13:12 +0000 (0:00:00.299) 0:00:04.850 ****
2026-02-18 04:13:16.758208 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:13:16.758219 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:13:16.758230 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:13:16.758240 | orchestrator |
2026-02-18 04:13:16.758251 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-18 04:13:16.758262 | orchestrator | Wednesday 18 February 2026 04:13:12 +0000 (0:00:00.313) 0:00:05.164 ****
2026-02-18 04:13:16.758272 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.758283 | orchestrator |
2026-02-18 04:13:16.758294 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-18 04:13:16.758305 | orchestrator | Wednesday 18 February 2026 04:13:12 +0000 (0:00:00.138) 0:00:05.302 ****
2026-02-18 04:13:16.758315 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.758338 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:13:16.758350 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:13:16.758360 | orchestrator |
2026-02-18 04:13:16.758371 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-18 04:13:16.758382 | orchestrator | Wednesday 18 February 2026 04:13:13 +0000 (0:00:00.281) 0:00:05.583 ****
2026-02-18 04:13:16.758393 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:13:16.758403 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:13:16.758414 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:13:16.758424 | orchestrator |
2026-02-18 04:13:16.758435 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-18 04:13:16.758446 | orchestrator | Wednesday 18 February 2026 04:13:13 +0000 (0:00:00.490) 0:00:06.074 ****
2026-02-18 04:13:16.758456 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.758467 | orchestrator |
2026-02-18 04:13:16.758477 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-18 04:13:16.758488 | orchestrator | Wednesday 18 February 2026 04:13:13 +0000 (0:00:00.133) 0:00:06.207 ****
2026-02-18 04:13:16.758499 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.758509 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:13:16.758520 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:13:16.758530 | orchestrator |
2026-02-18 04:13:16.758541 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-18 04:13:16.758552 | orchestrator | Wednesday 18 February 2026 04:13:14 +0000 (0:00:00.313) 0:00:06.520 ****
2026-02-18 04:13:16.758562 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:13:16.758573 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:13:16.758585 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:13:16.758606 | orchestrator |
2026-02-18 04:13:16.758626 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-18 04:13:16.758647 | orchestrator | Wednesday 18 February 2026 04:13:14 +0000 (0:00:00.322) 0:00:06.842 ****
2026-02-18 04:13:16.758663 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.758716 | orchestrator |
2026-02-18 04:13:16.758736 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-18 04:13:16.758747 | orchestrator | Wednesday 18 February 2026 04:13:14 +0000 (0:00:00.134) 0:00:06.977 ****
2026-02-18 04:13:16.758758 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.758769 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:13:16.758780 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:13:16.758790 | orchestrator |
2026-02-18 04:13:16.758801 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-18 04:13:16.758811 | orchestrator | Wednesday 18 February 2026 04:13:15 +0000 (0:00:00.458) 0:00:07.436 ****
2026-02-18 04:13:16.758822 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:13:16.758833 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:13:16.758850 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:13:16.758861 | orchestrator |
2026-02-18 04:13:16.758872 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-18 04:13:16.758883 | orchestrator | Wednesday 18 February 2026 04:13:15 +0000 (0:00:00.324) 0:00:07.761 ****
2026-02-18 04:13:16.758893 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.758904 | orchestrator |
2026-02-18 04:13:16.758915 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-18 04:13:16.758925 | orchestrator | Wednesday 18 February 2026 04:13:15 +0000 (0:00:00.141) 0:00:07.902 ****
2026-02-18 04:13:16.758936 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.758947 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:13:16.758957 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:13:16.758968 | orchestrator |
2026-02-18 04:13:16.758979 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-18 04:13:16.758990 | orchestrator | Wednesday 18 February 2026 04:13:15 +0000 (0:00:00.307) 0:00:08.210 ****
2026-02-18 04:13:16.759000 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:13:16.759011 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:13:16.759022 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:13:16.759033 | orchestrator |
2026-02-18 04:13:16.759043 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-18 04:13:16.759054 | orchestrator | Wednesday 18 February 2026 04:13:16 +0000 (0:00:00.296) 0:00:08.506 ****
2026-02-18 04:13:16.759065 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.759075 | orchestrator |
2026-02-18 04:13:16.759086 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-18 04:13:16.759097 | orchestrator | Wednesday 18 February 2026 04:13:16 +0000 (0:00:00.339) 0:00:08.846 ****
2026-02-18 04:13:16.759107 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:16.759118 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:13:16.759128 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:13:16.759139 | orchestrator |
2026-02-18 04:13:16.759150 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-18 04:13:16.759168 | orchestrator | Wednesday 18 February 2026 04:13:16 +0000 (0:00:00.301) 0:00:09.147 ****
2026-02-18 04:13:30.682493 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:13:30.682600 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:13:30.682614 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:13:30.682625 | orchestrator |
2026-02-18 04:13:30.682637 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-18 04:13:30.682649 | orchestrator | Wednesday 18 February 2026 04:13:17 +0000 (0:00:00.313) 0:00:09.461 ****
2026-02-18 04:13:30.682660 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:30.682672 | orchestrator |
2026-02-18 04:13:30.682684 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-18 04:13:30.682694 | orchestrator | Wednesday 18 February 2026 04:13:17 +0000 (0:00:00.135) 0:00:09.596 ****
2026-02-18 04:13:30.682705 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:30.682716 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:13:30.682784 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:13:30.682795 | orchestrator |
2026-02-18 04:13:30.682806 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-18 04:13:30.682845 | orchestrator | Wednesday 18 February 2026 04:13:17 +0000 (0:00:00.317) 0:00:09.913 ****
2026-02-18 04:13:30.682857 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:13:30.682868 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:13:30.682879 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:13:30.682889 | orchestrator |
2026-02-18 04:13:30.682912 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-18 04:13:30.682924 | orchestrator | Wednesday 18 February 2026 04:13:18 +0000 (0:00:00.505) 0:00:10.419 ****
2026-02-18 04:13:30.682934 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:30.682945 | orchestrator |
2026-02-18 04:13:30.682955 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-18 04:13:30.682966 | orchestrator | Wednesday 18 February 2026 04:13:18 +0000 (0:00:00.137) 0:00:10.557 ****
2026-02-18 04:13:30.682977 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:30.682987 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:13:30.682998 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:13:30.683009 | orchestrator |
2026-02-18 04:13:30.683019 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-18 04:13:30.683030 | orchestrator | Wednesday 18 February 2026 04:13:18 +0000 (0:00:00.312) 0:00:10.869 ****
2026-02-18 04:13:30.683042 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:13:30.683055 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:13:30.683066 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:13:30.683078 | orchestrator |
2026-02-18 04:13:30.683091 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-18 04:13:30.683103 | orchestrator | Wednesday 18 February 2026 04:13:18 +0000 (0:00:00.308) 0:00:11.178 ****
2026-02-18 04:13:30.683115 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:30.683127 | orchestrator |
2026-02-18 04:13:30.683139 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-18 04:13:30.683152 | orchestrator | Wednesday 18 February 2026 04:13:18 +0000 (0:00:00.130) 0:00:11.309 ****
2026-02-18 04:13:30.683164 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:30.683177 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:13:30.683190 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:13:30.683201 | orchestrator |
2026-02-18 04:13:30.683214 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-18 04:13:30.683225 | orchestrator | Wednesday 18 February 2026 04:13:19 +0000 (0:00:00.507) 0:00:11.816 ****
2026-02-18 04:13:30.683237 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:13:30.683249 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:13:30.683261 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:13:30.683273 | orchestrator |
2026-02-18 04:13:30.683285 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-18 04:13:30.683297 | orchestrator | Wednesday 18 February 2026 04:13:19 +0000 (0:00:00.319) 0:00:12.136 ****
2026-02-18 04:13:30.683309 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:30.683321 | orchestrator |
2026-02-18 04:13:30.683333 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-18 04:13:30.683345 | orchestrator | Wednesday 18 February 2026 04:13:19 +0000 (0:00:00.129) 0:00:12.265 ****
2026-02-18 04:13:30.683372 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:13:30.683385 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:13:30.683395 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:13:30.683406 | orchestrator |
2026-02-18 04:13:30.683416 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-18 04:13:30.683427 | orchestrator |
Wednesday 18 February 2026 04:13:20 +0000 (0:00:00.313) 0:00:12.579 **** 2026-02-18 04:13:30.683437 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:13:30.683448 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:13:30.683458 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:13:30.683469 | orchestrator | 2026-02-18 04:13:30.683479 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-18 04:13:30.683498 | orchestrator | Wednesday 18 February 2026 04:13:21 +0000 (0:00:01.795) 0:00:14.374 **** 2026-02-18 04:13:30.683519 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-18 04:13:30.683531 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-18 04:13:30.683541 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-18 04:13:30.683552 | orchestrator | 2026-02-18 04:13:30.683562 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-18 04:13:30.683573 | orchestrator | Wednesday 18 February 2026 04:13:23 +0000 (0:00:01.913) 0:00:16.288 **** 2026-02-18 04:13:30.683584 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-18 04:13:30.683595 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-18 04:13:30.683606 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-18 04:13:30.683617 | orchestrator | 2026-02-18 04:13:30.683627 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-18 04:13:30.683655 | orchestrator | Wednesday 18 February 2026 04:13:25 +0000 (0:00:01.957) 0:00:18.246 **** 2026-02-18 04:13:30.683667 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-18 04:13:30.683678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-18 04:13:30.683688 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-18 04:13:30.683699 | orchestrator | 2026-02-18 04:13:30.683710 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-18 04:13:30.683743 | orchestrator | Wednesday 18 February 2026 04:13:27 +0000 (0:00:01.574) 0:00:19.820 **** 2026-02-18 04:13:30.683754 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:13:30.683765 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:13:30.683775 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:13:30.683786 | orchestrator | 2026-02-18 04:13:30.683796 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-18 04:13:30.683807 | orchestrator | Wednesday 18 February 2026 04:13:27 +0000 (0:00:00.487) 0:00:20.308 **** 2026-02-18 04:13:30.683817 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:13:30.683828 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:13:30.683838 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:13:30.683849 | orchestrator | 2026-02-18 04:13:30.683859 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-18 04:13:30.683870 | orchestrator | Wednesday 18 February 2026 04:13:28 +0000 (0:00:00.295) 0:00:20.603 **** 2026-02-18 04:13:30.683880 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:13:30.683891 | orchestrator | 2026-02-18 04:13:30.683902 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-18 
04:13:30.683913 | orchestrator | Wednesday 18 February 2026 04:13:28 +0000 (0:00:00.574) 0:00:21.178 **** 2026-02-18 04:13:30.683937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 04:13:30.683973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 04:13:31.305303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 04:13:31.305425 | orchestrator | 2026-02-18 04:13:31.305442 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-18 04:13:31.305454 | orchestrator | Wednesday 18 February 2026 04:13:30 +0000 (0:00:01.887) 0:00:23.065 **** 2026-02-18 04:13:31.305487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 04:13:31.305508 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:13:31.305528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 04:13:31.305540 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:13:31.305560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 04:13:33.749327 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:13:33.749431 | orchestrator | 2026-02-18 04:13:33.749447 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-18 04:13:33.749460 | orchestrator | Wednesday 18 February 2026 04:13:31 +0000 (0:00:00.628) 0:00:23.694 **** 2026-02-18 04:13:33.749493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 04:13:33.749509 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:13:33.749539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 04:13:33.749574 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:13:33.749626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 04:13:33.749640 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:13:33.749651 | orchestrator | 2026-02-18 04:13:33.749662 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-18 04:13:33.749673 | orchestrator | Wednesday 18 February 2026 04:13:32 +0000 (0:00:00.831) 0:00:24.526 **** 2026-02-18 04:13:33.749701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 04:14:17.735038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 04:14:17.735219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 04:14:17.735240 | 
orchestrator |
2026-02-18 04:14:17.735254 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-18 04:14:17.735266 | orchestrator | Wednesday 18 February 2026 04:13:33 +0000 (0:00:01.613) 0:00:26.140 ****
2026-02-18 04:14:17.735277 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:14:17.735289 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:14:17.735300 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:14:17.735311 | orchestrator |
2026-02-18 04:14:17.735321 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-18 04:14:17.735332 | orchestrator | Wednesday 18 February 2026 04:13:34 +0000 (0:00:00.301) 0:00:26.441 ****
2026-02-18 04:14:17.735343 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 04:14:17.735354 | orchestrator |
2026-02-18 04:14:17.735365 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-02-18 04:14:17.735375 | orchestrator | Wednesday 18 February 2026 04:13:34 +0000 (0:00:00.534) 0:00:26.975 ****
2026-02-18 04:14:17.735386 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:14:17.735396 | orchestrator |
2026-02-18 04:14:17.735407 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-02-18 04:14:17.735417 | orchestrator | Wednesday 18 February 2026 04:13:36 +0000 (0:00:02.370) 0:00:29.345 ****
2026-02-18 04:14:17.735428 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:14:17.735443 | orchestrator |
2026-02-18 04:14:17.735463 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-02-18 04:14:17.735481 | orchestrator | Wednesday 18 February 2026 04:13:39 +0000 (0:00:02.778) 0:00:32.123 ****
2026-02-18 04:14:17.735500 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:14:17.735520 | orchestrator |
2026-02-18 04:14:17.735554 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-18 04:14:17.735575 | orchestrator | Wednesday 18 February 2026 04:13:56 +0000 (0:00:16.894) 0:00:49.018 ****
2026-02-18 04:14:17.735595 | orchestrator |
2026-02-18 04:14:17.735615 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-18 04:14:17.735634 | orchestrator | Wednesday 18 February 2026 04:13:56 +0000 (0:00:00.077) 0:00:49.095 ****
2026-02-18 04:14:17.735654 | orchestrator |
2026-02-18 04:14:17.735674 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-18 04:14:17.735696 | orchestrator | Wednesday 18 February 2026 04:13:56 +0000 (0:00:00.065) 0:00:49.161 ****
2026-02-18 04:14:17.735715 | orchestrator |
2026-02-18 04:14:17.735729 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-02-18 04:14:17.735741 | orchestrator | Wednesday 18 February 2026 04:13:56 +0000 (0:00:00.072) 0:00:49.233 ****
2026-02-18 04:14:17.735754 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:14:17.735766 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:14:17.735779 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:14:17.735790 | orchestrator |
2026-02-18 04:14:17.735803 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 04:14:17.735816 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-18 04:14:17.735898 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-02-18 04:14:17.735914 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-02-18 04:14:17.735925 | orchestrator |
2026-02-18 04:14:17.735936 | orchestrator |
2026-02-18 04:14:17.735946 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 04:14:17.735956 | orchestrator | Wednesday 18 February 2026 04:14:17 +0000 (0:00:20.868) 0:01:10.102 ****
2026-02-18 04:14:17.735967 | orchestrator | ===============================================================================
2026-02-18 04:14:17.735977 | orchestrator | horizon : Restart horizon container ------------------------------------ 20.87s
2026-02-18 04:14:17.735988 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.89s
2026-02-18 04:14:17.735998 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.78s
2026-02-18 04:14:17.736009 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.37s
2026-02-18 04:14:17.736020 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.96s
2026-02-18 04:14:17.736039 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.91s
2026-02-18 04:14:17.736050 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.89s
2026-02-18 04:14:17.736060 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.80s
2026-02-18 04:14:17.736071 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.61s
2026-02-18 04:14:17.736081 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.57s
2026-02-18 04:14:17.736092 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.16s
2026-02-18 04:14:17.736102 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.83s
2026-02-18 04:14:17.736113 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s
2026-02-18 04:14:17.736135 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.63s
2026-02-18 04:14:18.104190 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s
2026-02-18 04:14:18.104292 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s
2026-02-18 04:14:18.104306 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s
2026-02-18 04:14:18.104344 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s
2026-02-18 04:14:18.104356 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s
2026-02-18 04:14:18.104367 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s
2026-02-18 04:14:20.437964 | orchestrator | 2026-02-18 04:14:20 | INFO  | Task f4c34bdb-3e4e-4b47-8ae9-205efd7048ef (skyline) was prepared for execution.
2026-02-18 04:14:20.438233 | orchestrator | 2026-02-18 04:14:20 | INFO  | It takes a moment until task f4c34bdb-3e4e-4b47-8ae9-205efd7048ef (skyline) has been started and output is visible here.
2026-02-18 04:14:52.310750 | orchestrator |
2026-02-18 04:14:52.310866 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 04:14:52.310882 | orchestrator |
2026-02-18 04:14:52.310894 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 04:14:52.310905 | orchestrator | Wednesday 18 February 2026 04:14:24 +0000 (0:00:00.271) 0:00:00.271 ****
2026-02-18 04:14:52.310916 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:14:52.310928 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:14:52.310939 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:14:52.310950 | orchestrator |
2026-02-18 04:14:52.310961 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 04:14:52.310971 | orchestrator | Wednesday 18 February 2026 04:14:24 +0000 (0:00:00.313) 0:00:00.585 ****
2026-02-18 04:14:52.310982 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-02-18 04:14:52.311085 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-02-18 04:14:52.311100 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-02-18 04:14:52.311111 | orchestrator |
2026-02-18 04:14:52.311122 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-02-18 04:14:52.311133 | orchestrator |
2026-02-18 04:14:52.311143 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-02-18 04:14:52.311154 | orchestrator | Wednesday 18 February 2026 04:14:25 +0000 (0:00:00.413) 0:00:00.998 ****
2026-02-18 04:14:52.311165 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 04:14:52.311177 | orchestrator |
2026-02-18 04:14:52.311188 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-02-18 04:14:52.311199 | orchestrator | Wednesday 18 February 2026 04:14:25 +0000 (0:00:00.529) 0:00:01.527 ****
2026-02-18 04:14:52.311209 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-02-18 04:14:52.311220 | orchestrator |
2026-02-18 04:14:52.311231 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-02-18 04:14:52.311241 | orchestrator | Wednesday 18 February 2026 04:14:29 +0000 (0:00:03.623) 0:00:05.151 ****
2026-02-18 04:14:52.311253 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-02-18 04:14:52.311264 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-02-18 04:14:52.311276 | orchestrator |
2026-02-18 04:14:52.311289 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-02-18 04:14:52.311302 | orchestrator | Wednesday 18 February 2026 04:14:36 +0000 (0:00:06.730) 0:00:11.881 ****
2026-02-18 04:14:52.311315 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-18 04:14:52.311329 | orchestrator |
2026-02-18 04:14:52.311342 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-02-18 04:14:52.311355 | orchestrator | Wednesday 18 February 2026 04:14:39 +0000 (0:00:03.406) 0:00:15.287 ****
2026-02-18 04:14:52.311368 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-18 04:14:52.311380 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-02-18 04:14:52.311392 | orchestrator |
2026-02-18 04:14:52.311404 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-02-18 04:14:52.311446 | orchestrator | Wednesday 18 February 2026 04:14:43 +0000 (0:00:04.149) 0:00:19.437 ****
2026-02-18 04:14:52.311459 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-18 04:14:52.311471 | orchestrator | 2026-02-18 04:14:52.311484 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-18 04:14:52.311496 | orchestrator | Wednesday 18 February 2026 04:14:47 +0000 (0:00:03.325) 0:00:22.763 **** 2026-02-18 04:14:52.311515 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-18 04:14:52.311542 | orchestrator | 2026-02-18 04:14:52.311582 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-18 04:14:52.311602 | orchestrator | Wednesday 18 February 2026 04:14:50 +0000 (0:00:03.909) 0:00:26.673 **** 2026-02-18 04:14:52.311626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:52.311676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:52.311698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:52.311720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:52.311762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:52.311795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:56.084085 | orchestrator | 2026-02-18 04:14:56.084205 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-18 04:14:56.084233 | orchestrator | Wednesday 18 February 2026 04:14:52 +0000 (0:00:01.319) 0:00:27.992 **** 2026-02-18 04:14:56.084246 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:14:56.084258 | orchestrator | 2026-02-18 04:14:56.084269 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-18 04:14:56.084280 | orchestrator | Wednesday 18 February 2026 04:14:53 +0000 (0:00:00.706) 0:00:28.699 **** 2026-02-18 04:14:56.084294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:56.084333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:56.084360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:56.084422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:56.084439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:56.084451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:14:56.084470 | orchestrator | 2026-02-18 04:14:56.084482 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-18 04:14:56.084493 | orchestrator | Wednesday 18 February 2026 04:14:55 +0000 (0:00:02.484) 0:00:31.183 **** 2026-02-18 04:14:56.084510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-18 04:14:56.084524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-18 04:14:56.084538 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:14:56.084560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-18 04:14:57.360373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-18 04:14:57.360534 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:14:57.361260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-18 04:14:57.361285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-18 04:14:57.361298 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:14:57.361310 | orchestrator | 2026-02-18 04:14:57.361322 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-02-18 04:14:57.361334 | orchestrator | Wednesday 18 February 2026 04:14:56 +0000 (0:00:00.591) 0:00:31.775 **** 2026-02-18 04:14:57.361346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-18 04:14:57.361392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-18 04:14:57.361405 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:14:57.361422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-18 04:14:57.361434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-18 04:14:57.361445 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:14:57.361456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-18 04:14:57.361485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-18 04:15:05.726094 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:15:05.726214 | orchestrator | 2026-02-18 04:15:05.726231 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-02-18 04:15:05.726258 | orchestrator | Wednesday 18 February 2026 04:14:57 +0000 (0:00:01.271) 0:00:33.047 **** 2026-02-18 04:15:05.726299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:05.726317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:05.726329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:05.726361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:05.726395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:05.726414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-18 04:15:05.726425 | orchestrator |
2026-02-18 04:15:05.726437 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-02-18 04:15:05.726448 | orchestrator | Wednesday 18 February 2026 04:14:59 +0000 (0:00:02.448) 0:00:35.495 ****
2026-02-18 04:15:05.726459 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-18 04:15:05.726470 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-18 04:15:05.726481 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-18 04:15:05.726492 | orchestrator |
2026-02-18 04:15:05.726503 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-02-18 04:15:05.726514 | orchestrator | Wednesday 18 February 2026 04:15:01 +0000 (0:00:01.607) 0:00:37.102 ****
2026-02-18 04:15:05.726525 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-18 04:15:05.726537 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-18 04:15:05.726558 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-18 04:15:05.726572 | orchestrator |
2026-02-18 04:15:05.726585 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-02-18 04:15:05.726598 | orchestrator | Wednesday 18 February 2026 04:15:03 +0000 (0:00:02.108) 0:00:39.210 ****
2026-02-18 04:15:05.726611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:05.726634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:07.700516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:07.700617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:07.700658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:07.700672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:07.700684 | orchestrator | 2026-02-18 04:15:07.700697 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-02-18 04:15:07.700708 | orchestrator | Wednesday 18 February 2026 04:15:05 +0000 (0:00:02.202) 0:00:41.413 **** 2026-02-18 04:15:07.700719 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:15:07.700731 | orchestrator | skipping: 
[testbed-node-1] 2026-02-18 04:15:07.700742 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:15:07.700753 | orchestrator | 2026-02-18 04:15:07.700780 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-02-18 04:15:07.700792 | orchestrator | Wednesday 18 February 2026 04:15:06 +0000 (0:00:00.310) 0:00:41.724 **** 2026-02-18 04:15:07.700810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:07.700822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:07.700842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:07.700854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:07.700880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-18 04:15:39.625408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}}}})
2026-02-18 04:15:39.625551 | orchestrator |
2026-02-18 04:15:39.625568 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-02-18 04:15:39.625581 | orchestrator | Wednesday 18 February 2026 04:15:07 +0000 (0:00:01.665) 0:00:43.389 ****
2026-02-18 04:15:39.625592 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:15:39.625603 | orchestrator |
2026-02-18 04:15:39.625614 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-02-18 04:15:39.625625 | orchestrator | Wednesday 18 February 2026 04:15:09 +0000 (0:00:01.899) 0:00:45.288 ****
2026-02-18 04:15:39.625635 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:15:39.625646 | orchestrator |
2026-02-18 04:15:39.625656 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-02-18 04:15:39.625667 | orchestrator | Wednesday 18 February 2026 04:15:11 +0000 (0:00:02.213) 0:00:47.501 ****
2026-02-18 04:15:39.625677 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:15:39.625688 | orchestrator |
2026-02-18 04:15:39.625698 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-18 04:15:39.625709 | orchestrator | Wednesday 18 February 2026 04:15:19 +0000 (0:00:07.447) 0:00:54.949 ****
2026-02-18 04:15:39.625720 | orchestrator |
2026-02-18 04:15:39.625731 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-18 04:15:39.625741 | orchestrator | Wednesday 18 February 2026 04:15:19 +0000 (0:00:00.067) 0:00:55.016 ****
2026-02-18 04:15:39.625752 | orchestrator |
2026-02-18 04:15:39.625762 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-18 04:15:39.625773 | orchestrator | Wednesday 18 February 2026 04:15:19 +0000 (0:00:00.068) 0:00:55.084 ****
2026-02-18 04:15:39.625783 | orchestrator |
2026-02-18 04:15:39.625794 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-02-18 04:15:39.625804 | orchestrator | Wednesday 18 February 2026 04:15:19 +0000 (0:00:00.071) 0:00:55.156 ****
2026-02-18 04:15:39.625815 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:15:39.625826 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:15:39.625836 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:15:39.625846 | orchestrator |
2026-02-18 04:15:39.625857 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-02-18 04:15:39.625868 | orchestrator | Wednesday 18 February 2026 04:15:25 +0000 (0:00:05.744) 0:01:00.901 ****
2026-02-18 04:15:39.625878 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:15:39.625889 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:15:39.625899 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:15:39.625910 | orchestrator |
2026-02-18 04:15:39.625920 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 04:15:39.625932 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-18 04:15:39.625944 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-18 04:15:39.625955 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-18 04:15:39.625965 | orchestrator |
2026-02-18 04:15:39.625976 | orchestrator |
2026-02-18 04:15:39.625986 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 04:15:39.625997 | orchestrator | Wednesday 18 February 2026 04:15:39 +0000 (0:00:14.098) 0:01:14.999 ****
2026-02-18 04:15:39.626007 | orchestrator | ===============================================================================
2026-02-18 04:15:39.626091 | orchestrator | skyline : Restart skyline-console container ---------------------------- 14.10s
2026-02-18 04:15:39.626103 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.45s
2026-02-18 04:15:39.626167 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.73s
2026-02-18 04:15:39.626180 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 5.74s
2026-02-18 04:15:39.626204 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.15s
2026-02-18 04:15:39.626216 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.91s
2026-02-18 04:15:39.626226 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.62s
2026-02-18 04:15:39.626237 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.41s
2026-02-18 04:15:39.626265 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.33s
2026-02-18 04:15:39.626277 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.48s
2026-02-18 04:15:39.626288 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.45s
2026-02-18 04:15:39.626299 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.21s
2026-02-18 04:15:39.626309 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.20s
2026-02-18 04:15:39.626320 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.11s
2026-02-18 04:15:39.626331 | orchestrator | skyline : Creating Skyline database ------------------------------------- 1.90s
2026-02-18 04:15:39.626342 | orchestrator | skyline : Check skyline container --------------------------------------- 1.67s
2026-02-18 04:15:39.626352 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.61s
2026-02-18 04:15:39.626363 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.32s
2026-02-18 04:15:39.626374 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.27s
2026-02-18 04:15:39.626384 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.71s
2026-02-18 04:15:41.951391 | orchestrator | 2026-02-18 04:15:41 | INFO  | Task 40cc86d3-925f-4c2a-ace0-78bb498d3cc9 (glance) was prepared for execution.
2026-02-18 04:15:41.951517 | orchestrator | 2026-02-18 04:15:41 | INFO  | It takes a moment until task 40cc86d3-925f-4c2a-ace0-78bb498d3cc9 (glance) has been started and output is visible here.
2026-02-18 04:16:16.601314 | orchestrator | 2026-02-18 04:16:16.601460 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 04:16:16.601481 | orchestrator | 2026-02-18 04:16:16.601493 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 04:16:16.601504 | orchestrator | Wednesday 18 February 2026 04:15:46 +0000 (0:00:00.257) 0:00:00.257 **** 2026-02-18 04:16:16.601515 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:16:16.601528 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:16:16.601539 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:16:16.601549 | orchestrator | 2026-02-18 04:16:16.601560 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 04:16:16.601571 | orchestrator | Wednesday 18 February 2026 04:15:46 +0000 (0:00:00.301) 0:00:00.559 **** 2026-02-18 04:16:16.601582 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-18 04:16:16.601593 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-18 04:16:16.601604 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-18 04:16:16.601615 | orchestrator | 2026-02-18 04:16:16.601625 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-18 04:16:16.601636 | orchestrator | 2026-02-18 04:16:16.601647 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-18 04:16:16.601658 | orchestrator | Wednesday 18 February 2026 04:15:46 +0000 (0:00:00.436) 0:00:00.995 **** 2026-02-18 04:16:16.601692 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:16:16.601704 | orchestrator | 2026-02-18 04:16:16.601715 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-02-18 
04:16:16.601725 | orchestrator | Wednesday 18 February 2026 04:15:47 +0000 (0:00:00.560) 0:00:01.556 **** 2026-02-18 04:16:16.601736 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-18 04:16:16.601747 | orchestrator | 2026-02-18 04:16:16.601757 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-02-18 04:16:16.601768 | orchestrator | Wednesday 18 February 2026 04:15:50 +0000 (0:00:03.563) 0:00:05.120 **** 2026-02-18 04:16:16.601779 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-18 04:16:16.601789 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-18 04:16:16.601800 | orchestrator | 2026-02-18 04:16:16.601813 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-18 04:16:16.601825 | orchestrator | Wednesday 18 February 2026 04:15:57 +0000 (0:00:06.674) 0:00:11.795 **** 2026-02-18 04:16:16.601837 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-18 04:16:16.601851 | orchestrator | 2026-02-18 04:16:16.601864 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-18 04:16:16.601876 | orchestrator | Wednesday 18 February 2026 04:16:00 +0000 (0:00:03.414) 0:00:15.209 **** 2026-02-18 04:16:16.601888 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-18 04:16:16.601901 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-18 04:16:16.601914 | orchestrator | 2026-02-18 04:16:16.601926 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-18 04:16:16.601938 | orchestrator | Wednesday 18 February 2026 04:16:05 +0000 (0:00:04.099) 0:00:19.308 **** 2026-02-18 04:16:16.601950 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-18 
04:16:16.601963 | orchestrator | 2026-02-18 04:16:16.601976 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-18 04:16:16.601988 | orchestrator | Wednesday 18 February 2026 04:16:08 +0000 (0:00:03.462) 0:00:22.771 **** 2026-02-18 04:16:16.602015 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-18 04:16:16.602096 | orchestrator | 2026-02-18 04:16:16.602108 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-18 04:16:16.602121 | orchestrator | Wednesday 18 February 2026 04:16:12 +0000 (0:00:03.980) 0:00:26.752 **** 2026-02-18 04:16:16.602165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:16:16.602195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:16:16.602214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:16:16.602227 | orchestrator | 2026-02-18 04:16:16.602267 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-02-18 04:16:16.602280 | orchestrator | Wednesday 18 February 2026 04:16:15 +0000 (0:00:03.334) 0:00:30.087 **** 2026-02-18 04:16:16.602292 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:16:16.602310 | orchestrator | 2026-02-18 04:16:16.602329 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-18 04:16:31.359982 | orchestrator | Wednesday 18 February 2026 04:16:16 +0000 (0:00:00.715) 0:00:30.803 **** 2026-02-18 04:16:31.360086 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:16:31.360102 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:16:31.360114 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:16:31.360139 | orchestrator | 2026-02-18 04:16:31.360151 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-18 04:16:31.360163 | orchestrator | Wednesday 18 February 2026 04:16:19 +0000 (0:00:03.354) 0:00:34.158 **** 2026-02-18 04:16:31.360175 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-18 04:16:31.360187 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-18 04:16:31.360208 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-18 04:16:31.360220 | orchestrator | 2026-02-18 04:16:31.360231 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-18 04:16:31.360241 | orchestrator | Wednesday 18 February 2026 04:16:21 +0000 (0:00:01.555) 0:00:35.713 **** 2026-02-18 04:16:31.360252 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-18 
04:16:31.360263 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-18 04:16:31.360274 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-18 04:16:31.360341 | orchestrator | 2026-02-18 04:16:31.360352 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-18 04:16:31.360363 | orchestrator | Wednesday 18 February 2026 04:16:22 +0000 (0:00:01.404) 0:00:37.118 **** 2026-02-18 04:16:31.360374 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:16:31.360386 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:16:31.360397 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:16:31.360408 | orchestrator | 2026-02-18 04:16:31.360418 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-18 04:16:31.360429 | orchestrator | Wednesday 18 February 2026 04:16:23 +0000 (0:00:00.691) 0:00:37.809 **** 2026-02-18 04:16:31.360440 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:16:31.360451 | orchestrator | 2026-02-18 04:16:31.360462 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-18 04:16:31.360473 | orchestrator | Wednesday 18 February 2026 04:16:23 +0000 (0:00:00.153) 0:00:37.962 **** 2026-02-18 04:16:31.360484 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:16:31.360495 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:16:31.360506 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:16:31.360517 | orchestrator | 2026-02-18 04:16:31.360529 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-18 04:16:31.360542 | orchestrator | Wednesday 18 February 2026 04:16:24 +0000 (0:00:00.297) 0:00:38.259 **** 2026-02-18 04:16:31.360554 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:16:31.360566 | orchestrator | 2026-02-18 04:16:31.360578 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-18 04:16:31.360590 | orchestrator | Wednesday 18 February 2026 04:16:24 +0000 (0:00:00.726) 0:00:38.986 **** 2026-02-18 04:16:31.360626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:16:31.360689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:16:31.360712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:16:31.360736 | orchestrator | 2026-02-18 04:16:31.360749 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-18 04:16:31.360761 | orchestrator | Wednesday 18 February 2026 04:16:28 +0000 (0:00:03.719) 0:00:42.706 **** 2026-02-18 04:16:31.360785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 04:16:34.745804 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:16:34.745971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 04:16:34.746083 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:16:34.746101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 04:16:34.746113 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:16:34.746125 | orchestrator | 2026-02-18 04:16:34.746137 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-18 04:16:34.746149 | orchestrator | Wednesday 18 February 2026 04:16:31 +0000 (0:00:02.856) 0:00:45.563 **** 2026-02-18 04:16:34.746185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 04:16:34.746206 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:16:34.746224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 04:16:34.746237 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:16:34.746271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 04:17:07.790300 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:17:07.790496 | orchestrator | 2026-02-18 04:17:07.790516 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-18 04:17:07.790530 | orchestrator | Wednesday 18 February 2026 04:16:34 +0000 (0:00:03.382) 0:00:48.946 **** 2026-02-18 04:17:07.790541 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:17:07.790575 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:17:07.790587 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:17:07.790598 | orchestrator | 2026-02-18 04:17:07.790609 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-18 04:17:07.790620 | orchestrator | Wednesday 18 February 2026 04:16:37 +0000 (0:00:03.114) 0:00:52.060 **** 2026-02-18 04:17:07.790650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:17:07.790667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:17:07.790706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:17:07.790728 | orchestrator | 2026-02-18 04:17:07.790740 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-18 04:17:07.790751 | orchestrator | Wednesday 18 February 2026 04:16:41 +0000 (0:00:03.861) 0:00:55.921 **** 2026-02-18 04:17:07.790761 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:17:07.790772 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:17:07.790783 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:17:07.790793 | orchestrator | 2026-02-18 04:17:07.790804 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-18 04:17:07.790815 | orchestrator | Wednesday 18 February 2026 04:16:47 +0000 (0:00:05.378) 0:01:01.300 **** 2026-02-18 04:17:07.790826 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:17:07.790837 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:17:07.790851 | 
orchestrator | skipping: [testbed-node-2] 2026-02-18 04:17:07.790863 | orchestrator | 2026-02-18 04:17:07.790876 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-18 04:17:07.790889 | orchestrator | Wednesday 18 February 2026 04:16:50 +0000 (0:00:03.317) 0:01:04.618 **** 2026-02-18 04:17:07.790902 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:17:07.790914 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:17:07.790927 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:17:07.790939 | orchestrator | 2026-02-18 04:17:07.790952 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-18 04:17:07.790965 | orchestrator | Wednesday 18 February 2026 04:16:53 +0000 (0:00:03.131) 0:01:07.749 **** 2026-02-18 04:17:07.790977 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:17:07.790989 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:17:07.791002 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:17:07.791014 | orchestrator | 2026-02-18 04:17:07.791026 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-18 04:17:07.791039 | orchestrator | Wednesday 18 February 2026 04:16:56 +0000 (0:00:03.238) 0:01:10.987 **** 2026-02-18 04:17:07.791052 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:17:07.791064 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:17:07.791076 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:17:07.791088 | orchestrator | 2026-02-18 04:17:07.791101 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-18 04:17:07.791113 | orchestrator | Wednesday 18 February 2026 04:17:00 +0000 (0:00:03.241) 0:01:14.229 **** 2026-02-18 04:17:07.791125 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:17:07.791138 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:17:07.791157 | 
orchestrator | skipping: [testbed-node-2] 2026-02-18 04:17:07.791170 | orchestrator | 2026-02-18 04:17:07.791182 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-18 04:17:07.791195 | orchestrator | Wednesday 18 February 2026 04:17:00 +0000 (0:00:00.523) 0:01:14.752 **** 2026-02-18 04:17:07.791206 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-18 04:17:07.791217 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:17:07.791228 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-18 04:17:07.791239 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:17:07.791250 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-18 04:17:07.791261 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:17:07.791271 | orchestrator | 2026-02-18 04:17:07.791282 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-18 04:17:07.791293 | orchestrator | Wednesday 18 February 2026 04:17:03 +0000 (0:00:03.160) 0:01:17.913 **** 2026-02-18 04:17:07.791304 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:17:07.791315 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:17:07.791326 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:17:07.791336 | orchestrator | 2026-02-18 04:17:07.791347 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-18 04:17:07.791364 | orchestrator | Wednesday 18 February 2026 04:17:07 +0000 (0:00:04.072) 0:01:21.985 **** 2026-02-18 04:18:19.167475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:18:19.167663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:18:19.167793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 04:18:19.167817 | orchestrator | 2026-02-18 04:18:19.167830 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-18 04:18:19.167842 | orchestrator | Wednesday 18 February 2026 04:17:11 +0000 (0:00:03.594) 0:01:25.580 **** 2026-02-18 04:18:19.167853 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:18:19.167865 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:18:19.167875 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:18:19.167886 | orchestrator | 2026-02-18 04:18:19.167897 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-18 04:18:19.167908 | orchestrator | Wednesday 18 February 2026 04:17:11 +0000 (0:00:00.471) 0:01:26.051 **** 2026-02-18 04:18:19.167919 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:18:19.167930 | orchestrator | 2026-02-18 04:18:19.167940 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-18 04:18:19.167951 | orchestrator | Wednesday 18 February 2026 04:17:14 +0000 (0:00:02.186) 0:01:28.238 **** 2026-02-18 04:18:19.167962 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:18:19.167973 | orchestrator | 2026-02-18 04:18:19.167984 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-18 04:18:19.167995 | orchestrator | Wednesday 18 February 2026 04:17:16 +0000 (0:00:02.314) 0:01:30.553 **** 2026-02-18 04:18:19.168005 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:18:19.168023 | orchestrator | 2026-02-18 04:18:19.168034 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-18 04:18:19.168045 | orchestrator | Wednesday 18 February 2026 04:17:18 +0000 (0:00:02.188) 0:01:32.741 **** 2026-02-18 04:18:19.168056 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:18:19.168067 | orchestrator | 2026-02-18 04:18:19.168077 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-18 04:18:19.168088 | orchestrator | Wednesday 18 February 2026 04:17:46 +0000 (0:00:28.266) 0:02:01.007 **** 2026-02-18 04:18:19.168099 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:18:19.168110 | orchestrator | 2026-02-18 04:18:19.168121 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-18 04:18:19.168131 | orchestrator | Wednesday 18 February 2026 04:17:48 +0000 (0:00:02.203) 0:02:03.211 **** 2026-02-18 04:18:19.168142 | orchestrator | 2026-02-18 04:18:19.168153 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-18 04:18:19.168164 | orchestrator | Wednesday 18 February 2026 04:17:49 +0000 (0:00:00.066) 0:02:03.277 **** 2026-02-18 04:18:19.168175 | orchestrator | 2026-02-18 04:18:19.168186 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-18 04:18:19.168196 | orchestrator | Wednesday 18 February 2026 04:17:49 +0000 (0:00:00.067) 0:02:03.344 **** 2026-02-18 04:18:19.168207 | orchestrator | 2026-02-18 04:18:19.168218 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-18 04:18:19.168228 | orchestrator | Wednesday 18 February 2026 04:17:49 +0000 (0:00:00.069) 0:02:03.414 **** 2026-02-18 04:18:19.168239 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:18:19.168250 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:18:19.168261 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:18:19.168271 | orchestrator | 2026-02-18 04:18:19.168282 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:18:19.168294 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-18 04:18:19.168306 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-18 04:18:19.168317 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-18 04:18:19.168328 | orchestrator | 2026-02-18 04:18:19.168339 | orchestrator | 2026-02-18 04:18:19.168350 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:18:19.168361 | orchestrator | Wednesday 18 February 2026 04:18:19 +0000 (0:00:29.943) 0:02:33.357 **** 2026-02-18 04:18:19.168372 | orchestrator | =============================================================================== 2026-02-18 04:18:19.168383 | orchestrator | glance : Restart glance-api container ---------------------------------- 29.94s 2026-02-18 04:18:19.168393 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.27s 2026-02-18 04:18:19.168404 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.67s 2026-02-18 04:18:19.168423 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.38s 2026-02-18 04:18:19.614281 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.10s 2026-02-18 04:18:19.614394 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.07s 2026-02-18 04:18:19.614408 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.98s 2026-02-18 04:18:19.614420 | orchestrator | glance : Copying over config.json files for services -------------------- 3.86s 2026-02-18 04:18:19.614431 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.72s 2026-02-18 04:18:19.614442 | orchestrator | glance : Check glance containers ---------------------------------------- 3.59s 2026-02-18 04:18:19.614477 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.56s 2026-02-18 04:18:19.614513 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.46s 2026-02-18 04:18:19.614524 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.41s 2026-02-18 04:18:19.614535 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.38s 2026-02-18 04:18:19.614605 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.35s 2026-02-18 04:18:19.614618 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.33s 2026-02-18 04:18:19.614629 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.32s 2026-02-18 04:18:19.614640 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.24s 2026-02-18 04:18:19.614651 | orchestrator | 
glance : Copying over glance-image-import.conf -------------------------- 3.24s 2026-02-18 04:18:19.614661 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.16s 2026-02-18 04:18:22.060130 | orchestrator | 2026-02-18 04:18:22 | INFO  | Task d1f74fae-4b74-469a-a7ff-7b614332b8a6 (cinder) was prepared for execution. 2026-02-18 04:18:22.060232 | orchestrator | 2026-02-18 04:18:22 | INFO  | It takes a moment until task d1f74fae-4b74-469a-a7ff-7b614332b8a6 (cinder) has been started and output is visible here. 2026-02-18 04:18:58.218599 | orchestrator | 2026-02-18 04:18:58.218899 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 04:18:58.218924 | orchestrator | 2026-02-18 04:18:58.218936 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 04:18:58.218947 | orchestrator | Wednesday 18 February 2026 04:18:26 +0000 (0:00:00.254) 0:00:00.254 **** 2026-02-18 04:18:58.218958 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:18:58.218970 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:18:58.218980 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:18:58.218991 | orchestrator | 2026-02-18 04:18:58.219002 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 04:18:58.219012 | orchestrator | Wednesday 18 February 2026 04:18:26 +0000 (0:00:00.315) 0:00:00.570 **** 2026-02-18 04:18:58.219023 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-18 04:18:58.219034 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-18 04:18:58.219045 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-18 04:18:58.219055 | orchestrator | 2026-02-18 04:18:58.219066 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-18 04:18:58.219077 | orchestrator | 2026-02-18 
04:18:58.219087 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-18 04:18:58.219098 | orchestrator | Wednesday 18 February 2026 04:18:27 +0000 (0:00:00.446) 0:00:01.016 **** 2026-02-18 04:18:58.219108 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:18:58.219121 | orchestrator | 2026-02-18 04:18:58.219134 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-18 04:18:58.219146 | orchestrator | Wednesday 18 February 2026 04:18:27 +0000 (0:00:00.541) 0:00:01.558 **** 2026-02-18 04:18:58.219159 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-18 04:18:58.219172 | orchestrator | 2026-02-18 04:18:58.219186 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-18 04:18:58.219199 | orchestrator | Wednesday 18 February 2026 04:18:31 +0000 (0:00:04.021) 0:00:05.579 **** 2026-02-18 04:18:58.219211 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-18 04:18:58.219224 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-18 04:18:58.219236 | orchestrator | 2026-02-18 04:18:58.219249 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-18 04:18:58.219287 | orchestrator | Wednesday 18 February 2026 04:18:38 +0000 (0:00:06.717) 0:00:12.297 **** 2026-02-18 04:18:58.219300 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-18 04:18:58.219313 | orchestrator | 2026-02-18 04:18:58.219326 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-18 04:18:58.219338 | orchestrator | Wednesday 18 February 2026 04:18:41 +0000 (0:00:03.293) 
0:00:15.590 **** 2026-02-18 04:18:58.219350 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-18 04:18:58.219362 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-18 04:18:58.219374 | orchestrator | 2026-02-18 04:18:58.219386 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-18 04:18:58.219398 | orchestrator | Wednesday 18 February 2026 04:18:45 +0000 (0:00:04.106) 0:00:19.696 **** 2026-02-18 04:18:58.219410 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-18 04:18:58.219422 | orchestrator | 2026-02-18 04:18:58.219434 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-18 04:18:58.219447 | orchestrator | Wednesday 18 February 2026 04:18:48 +0000 (0:00:02.919) 0:00:22.616 **** 2026-02-18 04:18:58.219459 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-18 04:18:58.219471 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-18 04:18:58.219482 | orchestrator | 2026-02-18 04:18:58.219493 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-18 04:18:58.219503 | orchestrator | Wednesday 18 February 2026 04:18:56 +0000 (0:00:07.399) 0:00:30.016 **** 2026-02-18 04:18:58.219533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:18:58.219571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:18:58.219584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:18:58.219605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:18:58.219618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:18:58.219669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:18:58.219683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:18:58.219703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:03.966742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:03.966893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:03.966911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:03.966938 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:03.966953 | orchestrator | 2026-02-18 04:19:03.966975 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-18 04:19:03.966996 | orchestrator | Wednesday 18 February 2026 04:18:58 +0000 (0:00:02.106) 0:00:32.122 **** 2026-02-18 04:19:03.967013 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:19:03.967031 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:19:03.967048 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:19:03.967067 | orchestrator | 2026-02-18 04:19:03.967086 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-18 04:19:03.967104 | orchestrator | Wednesday 18 February 2026 04:18:58 +0000 (0:00:00.480) 0:00:32.603 **** 2026-02-18 04:19:03.967124 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:19:03.967143 | orchestrator | 2026-02-18 04:19:03.967161 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-18 04:19:03.967176 | orchestrator | Wednesday 18 February 2026 04:18:59 +0000 (0:00:00.531) 0:00:33.134 **** 2026-02-18 04:19:03.967188 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-18 04:19:03.967202 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-18 04:19:03.967214 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-18 04:19:03.967233 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-18 04:19:03.967265 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-18 04:19:03.967286 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-18 04:19:03.967304 | orchestrator | 2026-02-18 04:19:03.967323 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-18 04:19:03.967342 | orchestrator | Wednesday 18 February 2026 04:19:00 +0000 (0:00:01.592) 0:00:34.726 **** 2026-02-18 04:19:03.967390 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-18 04:19:03.967414 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-18 04:19:03.967446 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-18 04:19:03.967468 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-18 04:19:03.967500 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-18 04:19:14.517186 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-18 04:19:14.517344 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-18 04:19:14.517399 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-18 04:19:14.517436 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-18 04:19:14.517460 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-18 04:19:14.517518 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-18 
04:19:14.517531 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-18 04:19:14.517543 | orchestrator | 2026-02-18 04:19:14.517556 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-18 04:19:14.517568 | orchestrator | Wednesday 18 February 2026 04:19:04 +0000 (0:00:03.365) 0:00:38.092 **** 2026-02-18 04:19:14.517580 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-18 04:19:14.517597 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-18 04:19:14.517615 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-18 04:19:14.517643 | orchestrator | 2026-02-18 04:19:14.517696 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-18 04:19:14.517717 | orchestrator | Wednesday 18 February 2026 04:19:05 +0000 (0:00:01.508) 0:00:39.600 **** 2026-02-18 04:19:14.517735 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-18 04:19:14.517753 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-18 04:19:14.517771 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-18 04:19:14.517791 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-18 04:19:14.517811 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-18 04:19:14.517840 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-18 04:19:14.517858 | orchestrator | 2026-02-18 04:19:14.517876 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-18 04:19:14.517894 | orchestrator | Wednesday 18 February 2026 04:19:08 +0000 (0:00:02.661) 0:00:42.262 **** 2026-02-18 04:19:14.517911 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-18 04:19:14.517929 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-18 04:19:14.517962 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-18 04:19:14.517980 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-18 04:19:14.517998 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-18 04:19:14.518089 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-18 04:19:14.518119 | orchestrator | 2026-02-18 04:19:14.518139 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-18 04:19:14.518160 | orchestrator | Wednesday 18 February 2026 04:19:09 +0000 (0:00:00.988) 0:00:43.250 **** 2026-02-18 04:19:14.518178 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:19:14.518201 | orchestrator | 2026-02-18 04:19:14.518220 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-18 04:19:14.518239 | orchestrator | Wednesday 18 February 2026 04:19:09 +0000 (0:00:00.128) 0:00:43.379 **** 2026-02-18 04:19:14.518257 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:19:14.518277 | orchestrator | 
skipping: [testbed-node-1] 2026-02-18 04:19:14.518294 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:19:14.518313 | orchestrator | 2026-02-18 04:19:14.518333 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-18 04:19:14.518351 | orchestrator | Wednesday 18 February 2026 04:19:10 +0000 (0:00:00.489) 0:00:43.869 **** 2026-02-18 04:19:14.518371 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:19:14.518390 | orchestrator | 2026-02-18 04:19:14.518410 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-18 04:19:14.518427 | orchestrator | Wednesday 18 February 2026 04:19:10 +0000 (0:00:00.551) 0:00:44.420 **** 2026-02-18 04:19:14.518470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:19:15.404086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:19:15.404211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:19:15.404252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:15.404268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:15.404279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:15.404313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:15.404326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:15.404343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 
04:19:15.404363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:15.404374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:15.404386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:15.404398 | orchestrator | 2026-02-18 04:19:15.404411 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-18 04:19:15.404423 | orchestrator | Wednesday 18 February 2026 04:19:14 +0000 (0:00:03.990) 0:00:48.410 **** 2026-02-18 04:19:15.404445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 04:19:15.519600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:19:15.519793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 04:19:15.519814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 04:19:15.519826 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:19:15.519840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 04:19:15.519853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:19:15.519885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-18 04:19:15.519909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 04:19:15.519921 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:19:15.519933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 04:19:15.519945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:19:15.519956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 04:19:15.519968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 04:19:15.519985 | orchestrator | skipping: 
[testbed-node-2] 2026-02-18 04:19:15.519996 | orchestrator | 2026-02-18 04:19:15.520008 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-18 04:19:15.520028 | orchestrator | Wednesday 18 February 2026 04:19:15 +0000 (0:00:00.915) 0:00:49.326 **** 2026-02-18 04:19:16.064038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 04:19:16.064141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:19:16.064159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 04:19:16.064174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 04:19:16.064186 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:19:16.064216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 04:19:16.064287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:19:16.064308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 04:19:16.064320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 04:19:16.064331 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:19:16.064343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 04:19:16.064355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:19:16.064383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 04:19:20.705646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 04:19:20.705769 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:19:20.705780 | orchestrator | 2026-02-18 04:19:20.705798 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-02-18 04:19:20.705805 | orchestrator | Wednesday 18 February 2026 04:19:16 +0000 (0:00:00.845) 0:00:50.172 **** 2026-02-18 04:19:20.705813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:19:20.705822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 
04:19:20.705828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:19:20.705873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:20.705882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:20.705892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:20.705899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:20.705906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:20.705912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:20.705926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:33.216870 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:33.216992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:33.217009 | orchestrator | 2026-02-18 04:19:33.217023 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-18 04:19:33.217036 | orchestrator | Wednesday 18 February 2026 04:19:20 +0000 (0:00:04.445) 0:00:54.617 **** 2026-02-18 04:19:33.217047 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-18 04:19:33.217059 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-18 04:19:33.217070 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-18 04:19:33.217081 | orchestrator | 2026-02-18 04:19:33.217091 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-18 04:19:33.217102 | orchestrator | Wednesday 18 February 2026 04:19:22 +0000 (0:00:01.863) 0:00:56.481 **** 2026-02-18 04:19:33.217114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:19:33.217151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:19:33.217190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:19:33.217204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:33.217216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:33.217227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:33.217246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:33.217258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:33.217278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:35.612960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:35.613062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:35.613077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:35.613111 | orchestrator | 2026-02-18 04:19:35.613122 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-18 04:19:35.613133 | orchestrator | Wednesday 18 February 2026 04:19:33 +0000 (0:00:10.635) 0:01:07.117 **** 2026-02-18 04:19:35.613141 | orchestrator | changed: [testbed-node-0] 
2026-02-18 04:19:35.613151 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:19:35.613159 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:19:35.613168 | orchestrator | 2026-02-18 04:19:35.613176 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-18 04:19:35.613184 | orchestrator | Wednesday 18 February 2026 04:19:34 +0000 (0:00:01.505) 0:01:08.623 **** 2026-02-18 04:19:35.613193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 04:19:35.613204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-18 04:19:35.613235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 04:19:35.613245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 04:19:35.613271 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:19:35.613281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 04:19:35.613291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:19:35.613300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 04:19:35.613322 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 04:19:39.166712 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:19:39.166897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-18 04:19:39.166940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:19:39.166955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 04:19:39.166967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 04:19:39.166979 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:19:39.166991 | orchestrator | 2026-02-18 
04:19:39.167003 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-18 04:19:39.167016 | orchestrator | Wednesday 18 February 2026 04:19:35 +0000 (0:00:00.884) 0:01:09.508 **** 2026-02-18 04:19:39.167027 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:19:39.167038 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:19:39.167048 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:19:39.167059 | orchestrator | 2026-02-18 04:19:39.167069 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-18 04:19:39.167080 | orchestrator | Wednesday 18 February 2026 04:19:36 +0000 (0:00:00.530) 0:01:10.038 **** 2026-02-18 04:19:39.167124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:19:39.167147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:19:39.167158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-18 04:19:39.167170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:39.167183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:39.167199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:19:39.167219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:21:11.632316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:21:11.632425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-18 04:21:11.632445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:21:11.632457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-18 04:21:11.632485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-18 04:21:11.632521 | orchestrator | 2026-02-18 04:21:11.632534 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-18 04:21:11.632547 | orchestrator | Wednesday 18 February 2026 04:19:39 +0000 (0:00:03.027) 0:01:13.065 **** 2026-02-18 04:21:11.632558 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:21:11.632570 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:21:11.632580 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:21:11.632591 | orchestrator | 2026-02-18 04:21:11.632602 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-18 04:21:11.632612 | orchestrator | Wednesday 18 February 2026 04:19:39 +0000 (0:00:00.289) 0:01:13.354 **** 2026-02-18 04:21:11.632623 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:21:11.632634 | orchestrator | 2026-02-18 04:21:11.632662 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-18 04:21:11.632674 | orchestrator | Wednesday 18 February 2026 04:19:41 +0000 (0:00:02.160) 0:01:15.515 **** 2026-02-18 04:21:11.632685 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:21:11.632696 | orchestrator | 2026-02-18 04:21:11.632706 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-18 04:21:11.632717 | orchestrator | Wednesday 18 February 2026 04:19:44 +0000 (0:00:02.394) 0:01:17.910 **** 2026-02-18 04:21:11.632727 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:21:11.632738 | orchestrator | 2026-02-18 04:21:11.632748 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-18 04:21:11.632759 | orchestrator | Wednesday 18 February 2026 04:20:04 +0000 (0:00:20.133) 0:01:38.043 **** 2026-02-18 04:21:11.632769 | orchestrator | 2026-02-18 04:21:11.632780 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-18 04:21:11.632790 | orchestrator | Wednesday 18 February 2026 04:20:04 +0000 (0:00:00.068) 0:01:38.112 **** 2026-02-18 04:21:11.632801 | orchestrator | 2026-02-18 04:21:11.632811 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-18 04:21:11.632822 | orchestrator | Wednesday 18 February 2026 04:20:04 +0000 (0:00:00.069) 0:01:38.181 **** 2026-02-18 04:21:11.632832 | orchestrator | 2026-02-18 04:21:11.632843 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-18 04:21:11.632855 | orchestrator | Wednesday 18 February 2026 04:20:04 +0000 (0:00:00.070) 0:01:38.252 **** 2026-02-18 04:21:11.632867 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:21:11.632880 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:21:11.632893 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:21:11.632936 | orchestrator | 2026-02-18 04:21:11.632950 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-18 04:21:11.632962 | orchestrator | Wednesday 18 February 2026 04:20:30 +0000 (0:00:25.790) 0:02:04.043 **** 2026-02-18 04:21:11.632975 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:21:11.632987 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:21:11.632999 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:21:11.633012 | orchestrator | 2026-02-18 04:21:11.633024 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-18 04:21:11.633036 | orchestrator | Wednesday 18 February 2026 04:20:38 +0000 (0:00:08.193) 0:02:12.237 **** 2026-02-18 04:21:11.633049 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:21:11.633061 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:21:11.633072 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:21:11.633084 | orchestrator | 2026-02-18 
04:21:11.633097 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-18 04:21:11.633109 | orchestrator | Wednesday 18 February 2026 04:21:05 +0000 (0:00:26.868) 0:02:39.105 **** 2026-02-18 04:21:11.633121 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:21:11.633134 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:21:11.633146 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:21:11.633167 | orchestrator | 2026-02-18 04:21:11.633178 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-18 04:21:11.633189 | orchestrator | Wednesday 18 February 2026 04:21:11 +0000 (0:00:06.062) 0:02:45.167 **** 2026-02-18 04:21:11.633199 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:21:11.633210 | orchestrator | 2026-02-18 04:21:11.633220 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:21:11.633232 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-18 04:21:11.633245 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-18 04:21:11.633255 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-18 04:21:11.633266 | orchestrator | 2026-02-18 04:21:11.633276 | orchestrator | 2026-02-18 04:21:11.633287 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:21:11.633298 | orchestrator | Wednesday 18 February 2026 04:21:11 +0000 (0:00:00.256) 0:02:45.424 **** 2026-02-18 04:21:11.633309 | orchestrator | =============================================================================== 2026-02-18 04:21:11.633319 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.87s 2026-02-18 04:21:11.633330 | orchestrator | cinder 
: Restart cinder-api container ---------------------------------- 25.79s 2026-02-18 04:21:11.633340 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.13s 2026-02-18 04:21:11.633351 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.64s 2026-02-18 04:21:11.633368 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.19s 2026-02-18 04:21:11.633379 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.40s 2026-02-18 04:21:11.633389 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.72s 2026-02-18 04:21:11.633399 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.06s 2026-02-18 04:21:11.633410 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.45s 2026-02-18 04:21:11.633420 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.11s 2026-02-18 04:21:11.633431 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.02s 2026-02-18 04:21:11.633441 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.99s 2026-02-18 04:21:11.633452 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.37s 2026-02-18 04:21:11.633462 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.29s 2026-02-18 04:21:11.633480 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.03s 2026-02-18 04:21:12.018848 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.92s 2026-02-18 04:21:12.019028 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.66s 2026-02-18 04:21:12.019050 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.39s 2026-02-18 04:21:12.019061 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.16s 2026-02-18 04:21:12.019073 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.11s 2026-02-18 04:21:14.499310 | orchestrator | 2026-02-18 04:21:14 | INFO  | Task 9bbb0e1e-e485-45d7-9221-a8862fa34534 (barbican) was prepared for execution. 2026-02-18 04:21:14.499422 | orchestrator | 2026-02-18 04:21:14 | INFO  | It takes a moment until task 9bbb0e1e-e485-45d7-9221-a8862fa34534 (barbican) has been started and output is visible here. 2026-02-18 04:21:59.890139 | orchestrator | 2026-02-18 04:21:59.890242 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 04:21:59.890277 | orchestrator | 2026-02-18 04:21:59.890287 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 04:21:59.890296 | orchestrator | Wednesday 18 February 2026 04:21:18 +0000 (0:00:00.254) 0:00:00.254 **** 2026-02-18 04:21:59.890305 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:21:59.890315 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:21:59.890323 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:21:59.890332 | orchestrator | 2026-02-18 04:21:59.890341 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 04:21:59.890350 | orchestrator | Wednesday 18 February 2026 04:21:18 +0000 (0:00:00.309) 0:00:00.564 **** 2026-02-18 04:21:59.890359 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-18 04:21:59.890368 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-18 04:21:59.890376 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-18 04:21:59.890385 | orchestrator | 2026-02-18 04:21:59.890393 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-02-18 04:21:59.890402 | orchestrator | 2026-02-18 04:21:59.890410 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-18 04:21:59.890419 | orchestrator | Wednesday 18 February 2026 04:21:19 +0000 (0:00:00.517) 0:00:01.081 **** 2026-02-18 04:21:59.890428 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:21:59.890437 | orchestrator | 2026-02-18 04:21:59.890446 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-18 04:21:59.890454 | orchestrator | Wednesday 18 February 2026 04:21:20 +0000 (0:00:00.612) 0:00:01.694 **** 2026-02-18 04:21:59.890463 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-18 04:21:59.890472 | orchestrator | 2026-02-18 04:21:59.890480 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-18 04:21:59.890489 | orchestrator | Wednesday 18 February 2026 04:21:23 +0000 (0:00:03.551) 0:00:05.246 **** 2026-02-18 04:21:59.890497 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-18 04:21:59.890506 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-18 04:21:59.890514 | orchestrator | 2026-02-18 04:21:59.890523 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-18 04:21:59.890531 | orchestrator | Wednesday 18 February 2026 04:21:30 +0000 (0:00:06.681) 0:00:11.927 **** 2026-02-18 04:21:59.890539 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-18 04:21:59.890548 | orchestrator | 2026-02-18 04:21:59.890557 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-18 
04:21:59.890565 | orchestrator | Wednesday 18 February 2026 04:21:33 +0000 (0:00:03.489) 0:00:15.417 ****
2026-02-18 04:21:59.890574 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-18 04:21:59.890582 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-02-18 04:21:59.890590 | orchestrator |
2026-02-18 04:21:59.890599 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-02-18 04:21:59.890609 | orchestrator | Wednesday 18 February 2026 04:21:37 +0000 (0:00:04.122) 0:00:19.539 ****
2026-02-18 04:21:59.890619 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-18 04:21:59.890629 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-02-18 04:21:59.890639 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-02-18 04:21:59.890661 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-02-18 04:21:59.890671 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-02-18 04:21:59.890681 | orchestrator |
2026-02-18 04:21:59.890691 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-02-18 04:21:59.890701 | orchestrator | Wednesday 18 February 2026 04:21:54 +0000 (0:00:16.322) 0:00:35.862 ****
2026-02-18 04:21:59.890720 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-02-18 04:21:59.890730 | orchestrator |
2026-02-18 04:21:59.890740 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-02-18 04:21:59.890750 | orchestrator | Wednesday 18 February 2026 04:21:58 +0000 (0:00:04.000) 0:00:39.862 ****
2026-02-18 04:21:59.890764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:21:59.890794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:21:59.890806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:21:59.890817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:21:59.890833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:21:59.890849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:21:59.890866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:05.714144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:05.714268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:05.714286 | orchestrator |
2026-02-18 04:22:05.714300 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-02-18 04:22:05.714312 | orchestrator | Wednesday 18 February 2026 04:21:59 +0000 (0:00:01.618) 0:00:41.481 ****
2026-02-18 04:22:05.714324 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-02-18 04:22:05.714335 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-02-18 04:22:05.714346 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-02-18 04:22:05.714356 | orchestrator |
2026-02-18 04:22:05.714367 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-02-18 04:22:05.714378 | orchestrator | Wednesday 18 February 2026 04:22:00 +0000 (0:00:00.306) 0:00:42.608 ****
2026-02-18 04:22:05.714389 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:22:05.714400 | orchestrator |
2026-02-18 04:22:05.714411 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-02-18 04:22:05.714443 | orchestrator | Wednesday 18 February 2026 04:22:01 +0000 (0:00:00.306) 0:00:42.915 ****
2026-02-18 04:22:05.714455 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:22:05.714466 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:22:05.714476 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:22:05.714487 | orchestrator |
2026-02-18 04:22:05.714497 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-18 04:22:05.714508 | orchestrator | Wednesday 18 February 2026 04:22:01 +0000 (0:00:00.317) 0:00:43.233 ****
2026-02-18 04:22:05.714532 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 04:22:05.714544 | orchestrator |
2026-02-18 04:22:05.714554 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-02-18 04:22:05.714565 | orchestrator | Wednesday 18 February 2026 04:22:02 +0000 (0:00:00.537) 0:00:43.770 ****
2026-02-18 04:22:05.714580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:05.714613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:05.714628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:05.714642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:05.714670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:05.714684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:05.714697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:05.714720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:07.075761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:07.075863 | orchestrator |
2026-02-18 04:22:07.075880 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-02-18 04:22:07.075892 | orchestrator | Wednesday 18 February 2026 04:22:05 +0000 (0:00:03.537) 0:00:47.308 ****
2026-02-18 04:22:07.075929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:07.075957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:07.075970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:07.075981 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:22:07.075994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:07.076083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:07.076098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:07.076117 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:22:07.076134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:07.076146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:07.076157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:07.076168 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:22:07.076179 | orchestrator |
2026-02-18 04:22:07.076191 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-02-18 04:22:07.076202 | orchestrator | Wednesday 18 February 2026 04:22:06 +0000 (0:00:00.592) 0:00:47.900 ****
2026-02-18 04:22:07.076222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:10.624274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:10.624408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:10.624436 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:22:10.624479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:10.624513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:10.624535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:10.624556 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:22:10.624601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:10.624636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:10.624654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:10.624666 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:22:10.624677 | orchestrator |
2026-02-18 04:22:10.624688 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-02-18 04:22:10.624730 | orchestrator | Wednesday 18 February 2026 04:22:07 +0000 (0:00:00.779) 0:00:48.679 ****
2026-02-18 04:22:10.624743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:10.624756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:10.624784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-18 04:22:19.825253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:19.825408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:19.825439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-18 04:22:19.825461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:19.825481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:19.825515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:22:19.825528 | orchestrator |
2026-02-18 04:22:19.825541 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-02-18 04:22:19.825554 | orchestrator | Wednesday 18 February 2026 04:22:10 +0000 (0:00:03.541) 0:00:52.221 ****
2026-02-18 04:22:19.825565 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:22:19.825577 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:22:19.825587 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:22:19.825598 | orchestrator |
2026-02-18 04:22:19.825626 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-02-18 04:22:19.825638 | orchestrator | Wednesday 18 February 2026 04:22:12 +0000 (0:00:01.464) 0:00:53.686 ****
2026-02-18 04:22:19.825649 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-18 04:22:19.825660 | orchestrator |
2026-02-18 04:22:19.825671 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-02-18 04:22:19.825681 | orchestrator | Wednesday 18 February 2026 04:22:12 +0000 (0:00:00.871) 0:00:54.557 ****
2026-02-18 04:22:19.825692 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:22:19.825703 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:22:19.825713 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:22:19.825723 | orchestrator |
2026-02-18 04:22:19.825734 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-02-18 04:22:19.825744 | orchestrator | Wednesday 18 February 2026 04:22:13 +0000 (0:00:00.543) 0:00:55.101 ****
2026-02-18 04:22:19.825811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-18 04:22:19.825837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-18 04:22:19.825862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-18 04:22:19.825885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:20.652673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:20.652773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:20.652787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:20.652814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:20.652823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:20.652832 | orchestrator | 2026-02-18 04:22:20.652841 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-18 04:22:20.652851 | orchestrator | Wednesday 18 February 2026 04:22:19 +0000 (0:00:06.319) 0:01:01.421 **** 2026-02-18 04:22:20.652874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-18 04:22:20.652888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:22:20.652898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:22:20.652906 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:22:20.652915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-18 04:22:20.652933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:22:20.652941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:22:20.652949 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:22:20.652965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-18 04:22:23.094374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:22:23.094480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:22:23.094520 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:22:23.094534 | orchestrator | 2026-02-18 04:22:23.094546 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-18 04:22:23.094558 | orchestrator | Wednesday 18 February 2026 04:22:20 +0000 (0:00:00.828) 0:01:02.250 **** 2026-02-18 04:22:23.094570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-18 04:22:23.094583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-18 04:22:23.094612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-18 04:22:23.094631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:23.094651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:23.094663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:23.094674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:23.094685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:23.094696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:22:23.094708 | orchestrator | 2026-02-18 04:22:23.094719 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-18 04:22:23.094736 | orchestrator | Wednesday 18 February 2026 04:22:23 +0000 (0:00:02.438) 0:01:04.689 **** 2026-02-18 04:23:01.948188 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:23:01.948304 | orchestrator | skipping: [testbed-node-1] 2026-02-18 
04:23:01.948320 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:23:01.948334 | orchestrator | 2026-02-18 04:23:01.948363 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-18 04:23:01.948399 | orchestrator | Wednesday 18 February 2026 04:22:23 +0000 (0:00:00.290) 0:01:04.980 **** 2026-02-18 04:23:01.948411 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:23:01.948422 | orchestrator | 2026-02-18 04:23:01.948433 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-18 04:23:01.948444 | orchestrator | Wednesday 18 February 2026 04:22:25 +0000 (0:00:02.204) 0:01:07.184 **** 2026-02-18 04:23:01.948455 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:23:01.948465 | orchestrator | 2026-02-18 04:23:01.948476 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-18 04:23:01.948487 | orchestrator | Wednesday 18 February 2026 04:22:27 +0000 (0:00:02.288) 0:01:09.473 **** 2026-02-18 04:23:01.948498 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:23:01.948508 | orchestrator | 2026-02-18 04:23:01.948519 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-18 04:23:01.948530 | orchestrator | Wednesday 18 February 2026 04:22:40 +0000 (0:00:12.657) 0:01:22.130 **** 2026-02-18 04:23:01.948540 | orchestrator | 2026-02-18 04:23:01.948551 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-18 04:23:01.948562 | orchestrator | Wednesday 18 February 2026 04:22:40 +0000 (0:00:00.067) 0:01:22.197 **** 2026-02-18 04:23:01.948572 | orchestrator | 2026-02-18 04:23:01.948583 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-18 04:23:01.948594 | orchestrator | Wednesday 18 February 2026 04:22:40 +0000 (0:00:00.067) 0:01:22.264 **** 2026-02-18 
04:23:01.948604 | orchestrator | 2026-02-18 04:23:01.948615 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-18 04:23:01.948626 | orchestrator | Wednesday 18 February 2026 04:22:40 +0000 (0:00:00.068) 0:01:22.333 **** 2026-02-18 04:23:01.948636 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:23:01.948647 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:23:01.948657 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:23:01.948668 | orchestrator | 2026-02-18 04:23:01.948681 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-18 04:23:01.948693 | orchestrator | Wednesday 18 February 2026 04:22:46 +0000 (0:00:06.029) 0:01:28.362 **** 2026-02-18 04:23:01.948705 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:23:01.948719 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:23:01.948732 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:23:01.948745 | orchestrator | 2026-02-18 04:23:01.948757 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-18 04:23:01.948770 | orchestrator | Wednesday 18 February 2026 04:22:56 +0000 (0:00:09.661) 0:01:38.024 **** 2026-02-18 04:23:01.948783 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:23:01.948796 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:23:01.948808 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:23:01.948820 | orchestrator | 2026-02-18 04:23:01.948832 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:23:01.948845 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-18 04:23:01.948859 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 04:23:01.948872 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 04:23:01.948884 | orchestrator | 2026-02-18 04:23:01.948896 | orchestrator | 2026-02-18 04:23:01.948909 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:23:01.948921 | orchestrator | Wednesday 18 February 2026 04:23:01 +0000 (0:00:05.202) 0:01:43.227 **** 2026-02-18 04:23:01.948934 | orchestrator | =============================================================================== 2026-02-18 04:23:01.948946 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.32s 2026-02-18 04:23:01.948968 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.66s 2026-02-18 04:23:01.948981 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.66s 2026-02-18 04:23:01.948993 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.68s 2026-02-18 04:23:01.949005 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.32s 2026-02-18 04:23:01.949016 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.03s 2026-02-18 04:23:01.949026 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.20s 2026-02-18 04:23:01.949037 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.12s 2026-02-18 04:23:01.949047 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.00s 2026-02-18 04:23:01.949058 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.55s 2026-02-18 04:23:01.949069 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.54s 2026-02-18 04:23:01.949079 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.54s 
2026-02-18 04:23:01.949090 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.49s 2026-02-18 04:23:01.949101 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.44s 2026-02-18 04:23:01.949131 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.29s 2026-02-18 04:23:01.949161 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.20s 2026-02-18 04:23:01.949172 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.62s 2026-02-18 04:23:01.949188 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.46s 2026-02-18 04:23:01.949200 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.13s 2026-02-18 04:23:01.949210 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.87s 2026-02-18 04:23:04.232585 | orchestrator | 2026-02-18 04:23:04 | INFO  | Task 9174751b-9809-48fa-b7bd-675cfa934434 (designate) was prepared for execution. 2026-02-18 04:23:04.232655 | orchestrator | 2026-02-18 04:23:04 | INFO  | It takes a moment until task 9174751b-9809-48fa-b7bd-675cfa934434 (designate) has been started and output is visible here. 
2026-02-18 04:23:37.175679 | orchestrator | 2026-02-18 04:23:37.175803 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 04:23:37.175822 | orchestrator | 2026-02-18 04:23:37.175834 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 04:23:37.175846 | orchestrator | Wednesday 18 February 2026 04:23:08 +0000 (0:00:00.253) 0:00:00.253 **** 2026-02-18 04:23:37.175857 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:23:37.175869 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:23:37.175880 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:23:37.175890 | orchestrator | 2026-02-18 04:23:37.175901 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 04:23:37.175912 | orchestrator | Wednesday 18 February 2026 04:23:08 +0000 (0:00:00.306) 0:00:00.559 **** 2026-02-18 04:23:37.175924 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-02-18 04:23:37.175935 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-18 04:23:37.175945 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-18 04:23:37.175956 | orchestrator | 2026-02-18 04:23:37.175967 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-18 04:23:37.175977 | orchestrator | 2026-02-18 04:23:37.175988 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-18 04:23:37.175999 | orchestrator | Wednesday 18 February 2026 04:23:09 +0000 (0:00:00.436) 0:00:00.996 **** 2026-02-18 04:23:37.176010 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:23:37.176047 | orchestrator | 2026-02-18 04:23:37.176058 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-02-18 04:23:37.176069 | orchestrator | Wednesday 18 February 2026 04:23:09 +0000 (0:00:00.546) 0:00:01.542 **** 2026-02-18 04:23:37.176079 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-02-18 04:23:37.176090 | orchestrator | 2026-02-18 04:23:37.176100 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-02-18 04:23:37.176111 | orchestrator | Wednesday 18 February 2026 04:23:13 +0000 (0:00:03.627) 0:00:05.170 **** 2026-02-18 04:23:37.176122 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-18 04:23:37.176133 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-02-18 04:23:37.176143 | orchestrator | 2026-02-18 04:23:37.176154 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-18 04:23:37.176164 | orchestrator | Wednesday 18 February 2026 04:23:20 +0000 (0:00:06.815) 0:00:11.986 **** 2026-02-18 04:23:37.176199 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-18 04:23:37.176213 | orchestrator | 2026-02-18 04:23:37.176226 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-18 04:23:37.176238 | orchestrator | Wednesday 18 February 2026 04:23:23 +0000 (0:00:03.346) 0:00:15.332 **** 2026-02-18 04:23:37.176250 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-18 04:23:37.176262 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-18 04:23:37.176273 | orchestrator | 2026-02-18 04:23:37.176285 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-18 04:23:37.176297 | orchestrator | Wednesday 18 February 2026 04:23:27 +0000 (0:00:04.283) 0:00:19.616 **** 2026-02-18 04:23:37.176309 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-02-18 04:23:37.176322 | orchestrator | 2026-02-18 04:23:37.176334 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-18 04:23:37.176347 | orchestrator | Wednesday 18 February 2026 04:23:31 +0000 (0:00:03.426) 0:00:23.043 **** 2026-02-18 04:23:37.176359 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-18 04:23:37.176371 | orchestrator | 2026-02-18 04:23:37.176383 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-18 04:23:37.176396 | orchestrator | Wednesday 18 February 2026 04:23:35 +0000 (0:00:04.013) 0:00:27.056 **** 2026-02-18 04:23:37.176426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:23:37.176464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:23:37.176489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:23:37.176503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:23:37.176516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:23:37.176528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:23:37.176544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:37.176565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:43.497110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:43.497244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:43.497268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:43.497286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:43.497302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:43.497337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:43.497398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:43.497410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 
04:23:43.497421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:43.497431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:43.497441 | orchestrator | 2026-02-18 04:23:43.497453 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-18 04:23:43.497464 | orchestrator | Wednesday 18 February 2026 04:23:37 +0000 (0:00:02.767) 0:00:29.823 **** 2026-02-18 04:23:43.497474 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:23:43.497485 | orchestrator | 2026-02-18 04:23:43.497495 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-18 04:23:43.497504 | orchestrator | Wednesday 18 February 2026 04:23:38 +0000 (0:00:00.134) 0:00:29.958 **** 2026-02-18 04:23:43.497514 | orchestrator | skipping: [testbed-node-0] 2026-02-18 
04:23:43.497523 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:23:43.497534 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:23:43.497544 | orchestrator | 2026-02-18 04:23:43.497553 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-18 04:23:43.497563 | orchestrator | Wednesday 18 February 2026 04:23:38 +0000 (0:00:00.503) 0:00:30.461 **** 2026-02-18 04:23:43.497573 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:23:43.497582 | orchestrator | 2026-02-18 04:23:43.497592 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-18 04:23:43.497609 | orchestrator | Wednesday 18 February 2026 04:23:39 +0000 (0:00:00.566) 0:00:31.028 **** 2026-02-18 04:23:43.497625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:23:43.497646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:23:45.352170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:23:45.352269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:23:45.352278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:23:45.352311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:23:45.352317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:45.352333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:45.352338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:45.352343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:45.352350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:45.352355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:45.352366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:45.352372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:45.352381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:46.244294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:46.244398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:46.244413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:46.244449 | orchestrator | 2026-02-18 04:23:46.244462 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-18 04:23:46.244490 | orchestrator | Wednesday 18 February 2026 04:23:45 +0000 (0:00:06.224) 0:00:37.253 **** 2026-02-18 04:23:46.244529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-18 04:23:46.244544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 04:23:46.244576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 04:23:46.244590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 04:23:46.244602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 04:23:46.244614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-02-18 04:23:46.244635 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:23:46.244653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-18 04:23:46.244665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 04:23:46.244676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 04:23:46.244695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 04:23:46.989274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 04:23:46.989428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:23:46.989447 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:23:46.989476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-18 04:23:46.989490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 04:23:46.989503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 04:23:46.989514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 04:23:46.989545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 
04:23:46.989564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:23:46.989576 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:23:46.989587 | orchestrator | 2026-02-18 04:23:46.989598 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-18 04:23:46.989611 | orchestrator | Wednesday 18 February 2026 04:23:46 +0000 (0:00:00.996) 0:00:38.250 **** 2026-02-18 04:23:46.989628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-18 04:23:46.989640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 04:23:46.989651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 04:23:46.989669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 04:23:47.344480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 04:23:47.344609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:23:47.344627 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:23:47.344659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-18 04:23:47.344683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 04:23:47.344697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 04:23:47.344709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 04:23:47.344764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 04:23:47.344787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:23:47.344805 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:23:47.344830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-18 04:23:47.344848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 04:23:47.344865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 04:23:47.344882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 04:23:47.344919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 04:23:51.517578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:23:51.517676 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:23:51.517692 | orchestrator | 2026-02-18 04:23:51.517702 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-18 
04:23:51.517712 | orchestrator | Wednesday 18 February 2026 04:23:47 +0000 (0:00:00.992) 0:00:39.242 **** 2026-02-18 04:23:51.517738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:23:51.517750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:23:51.517760 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:23:51.517803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:23:51.517815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:23:51.517828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:23:51.517838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:51.517848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:51.517857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:51.517873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:23:51.517890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:02.951943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:02.952076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:02.952095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:02.952108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:02.952140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:02.952154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:02.952195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:02.952267 | orchestrator | 2026-02-18 04:24:02.952294 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-18 04:24:02.952314 | orchestrator | Wednesday 18 February 2026 04:23:53 +0000 (0:00:06.022) 0:00:45.264 **** 2026-02-18 04:24:02.952335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:24:02.952349 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:24:02.952371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-18 04:24:02.952383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:24:02.952408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:10.962701 | orchestrator | 2026-02-18 04:24:10.962710 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-18 04:24:10.962719 | orchestrator | Wednesday 18 February 2026 04:24:07 +0000 (0:00:14.002) 0:00:59.267 **** 2026-02-18 04:24:10.962732 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-18 04:24:15.162327 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-18 04:24:15.162465 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-18 04:24:15.162492 | orchestrator | 2026-02-18 04:24:15.162506 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-18 04:24:15.162517 | orchestrator | Wednesday 18 February 2026 04:24:10 +0000 (0:00:03.595) 0:01:02.862 **** 2026-02-18 04:24:15.162528 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-18 04:24:15.162539 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-18 04:24:15.162550 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-18 04:24:15.162561 | orchestrator | 2026-02-18 04:24:15.162572 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-18 04:24:15.162601 | orchestrator | Wednesday 18 February 2026 04:24:13 +0000 (0:00:02.382) 0:01:05.245 **** 2026-02-18 04:24:15.162615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-18 04:24:15.162657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-18 04:24:15.162670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-18 04:24:15.162702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:24:15.162717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 04:24:15.162735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-18 04:24:15.162757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 04:24:15.162769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:24:15.162780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-18 04:24:15.162792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 04:24:15.162812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 04:24:17.932819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-02-18 04:24:17.932934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 04:24:17.932947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 04:24:17.932956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 04:24:17.932964 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:17.932973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:17.932996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:24:17.933012 | orchestrator | 2026-02-18 04:24:17.933022 | orchestrator | TASK [designate : Copying over rndc.key] 
***************************************
2026-02-18 04:24:17.933032 | orchestrator | Wednesday 18 February 2026 04:24:16 +0000 (0:00:02.902) 0:01:08.147 ****
2026-02-18 04:24:17.933045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-18 04:24:17.933057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-18 04:24:17.933065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-18 04:24:17.933073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-18 04:24:17.933088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-18 04:24:18.879782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-18 04:24:18.879845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:24:18.879891 | orchestrator |
2026-02-18 04:24:18.879899 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-18 04:24:18.879911 | orchestrator | Wednesday 18 February 2026 04:24:18 +0000 (0:00:02.626) 0:01:10.774 ****
2026-02-18 04:24:19.835322 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:24:19.835492 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:24:19.835509 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:24:19.835521 | orchestrator |
2026-02-18 04:24:19.835534 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-18 04:24:19.835546 | orchestrator | Wednesday 18 February 2026 04:24:19 +0000 (0:00:00.329) 0:01:11.103 ****
2026-02-18 04:24:19.835580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-18 04:24:19.835597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-18 04:24:19.835611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-18 04:24:19.835624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-18 04:24:19.835659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-18 04:24:19.835689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:24:19.835702 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:24:19.835719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-18 04:24:19.835732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-18 04:24:19.835743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-18 04:24:19.835754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-18 04:24:19.835776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-18 04:24:19.835796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:24:23.208029 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:24:23.208120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-18 04:24:23.208130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-18 04:24:23.208141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-18 04:24:23.208150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-18 04:24:23.208182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-18 04:24:23.208191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:24:23.208200 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:24:23.208209 | orchestrator |
2026-02-18 04:24:23.208228 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-02-18 04:24:23.208235 | orchestrator | Wednesday 18 February 2026 04:24:19 +0000 (0:00:00.746) 0:01:11.850 ****
2026-02-18 04:24:23.208243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-18 04:24:23.208307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-18 04:24:23.208313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-18 04:24:23.208324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-18 04:24:23.208333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-18 04:24:25.034901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-18 04:24:25.035006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:24:25.035206 | orchestrator |
2026-02-18 04:24:25.035219 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-18 04:24:25.035231 | orchestrator | Wednesday 18 February 2026 04:24:24 +0000 (0:00:04.762) 0:01:16.612 ****
2026-02-18 04:24:25.035243 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:24:25.035323 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:25:45.627214 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:25:45.627332 | orchestrator |
2026-02-18 04:25:45.627347 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-02-18 04:25:45.627448 | orchestrator | Wednesday 18 February 2026 04:24:25 +0000 (0:00:00.316) 0:01:16.929 ****
2026-02-18 04:25:45.627473 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-02-18 04:25:45.627492 | orchestrator |
2026-02-18 04:25:45.627503 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-02-18 04:25:45.627514 | orchestrator | Wednesday 18 February 2026 04:24:27 +0000 (0:00:02.304) 0:01:19.234 ****
2026-02-18 04:25:45.627525 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-18 04:25:45.627536 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-02-18 04:25:45.627547 | orchestrator |
2026-02-18 04:25:45.627558 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-02-18 04:25:45.627569 | orchestrator | Wednesday 18 February 2026 04:24:29 +0000 (0:00:02.452) 0:01:21.686 ****
2026-02-18 04:25:45.627580 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:25:45.627591 | orchestrator |
2026-02-18 04:25:45.627602 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-18 04:25:45.627612 | orchestrator | Wednesday 18 February 2026 04:24:45 +0000 (0:00:16.203) 0:01:37.889 ****
2026-02-18 04:25:45.627623 | orchestrator |
2026-02-18 04:25:45.627634 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-18 04:25:45.627645 | orchestrator | Wednesday 18 February 2026 04:24:46 +0000 (0:00:00.069) 0:01:37.959 ****
2026-02-18 04:25:45.627655 | orchestrator |
2026-02-18 04:25:45.627690 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-18 04:25:45.627701 | orchestrator | Wednesday 18 February 2026 04:24:46 +0000 (0:00:00.070) 0:01:38.029 ****
2026-02-18 04:25:45.627714 | orchestrator |
2026-02-18
04:25:45.627726 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-18 04:25:45.627738 | orchestrator | Wednesday 18 February 2026 04:24:46 +0000 (0:00:00.071) 0:01:38.101 **** 2026-02-18 04:25:45.627752 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:25:45.627765 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:25:45.627777 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:25:45.627789 | orchestrator | 2026-02-18 04:25:45.627801 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-18 04:25:45.627814 | orchestrator | Wednesday 18 February 2026 04:24:55 +0000 (0:00:08.885) 0:01:46.987 **** 2026-02-18 04:25:45.627825 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:25:45.627837 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:25:45.627849 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:25:45.627912 | orchestrator | 2026-02-18 04:25:45.627923 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-18 04:25:45.627934 | orchestrator | Wednesday 18 February 2026 04:25:05 +0000 (0:00:10.754) 0:01:57.741 **** 2026-02-18 04:25:45.627945 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:25:45.627955 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:25:45.627988 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:25:45.628000 | orchestrator | 2026-02-18 04:25:45.628022 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-18 04:25:45.628033 | orchestrator | Wednesday 18 February 2026 04:25:16 +0000 (0:00:10.368) 0:02:08.110 **** 2026-02-18 04:25:45.628056 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:25:45.628067 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:25:45.628078 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:25:45.628088 | orchestrator | 2026-02-18 04:25:45.628099 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-18 04:25:45.628110 | orchestrator | Wednesday 18 February 2026 04:25:21 +0000 (0:00:05.392) 0:02:13.502 **** 2026-02-18 04:25:45.628120 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:25:45.628131 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:25:45.628142 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:25:45.628153 | orchestrator | 2026-02-18 04:25:45.628163 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-18 04:25:45.628174 | orchestrator | Wednesday 18 February 2026 04:25:27 +0000 (0:00:05.767) 0:02:19.269 **** 2026-02-18 04:25:45.628185 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:25:45.628196 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:25:45.628206 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:25:45.628217 | orchestrator | 2026-02-18 04:25:45.628228 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-18 04:25:45.628239 | orchestrator | Wednesday 18 February 2026 04:25:38 +0000 (0:00:10.722) 0:02:29.992 **** 2026-02-18 04:25:45.628249 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:25:45.628260 | orchestrator | 2026-02-18 04:25:45.628271 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:25:45.628283 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-18 04:25:45.628295 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 04:25:45.628306 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 04:25:45.628316 | orchestrator | 2026-02-18 04:25:45.628327 | orchestrator | 2026-02-18 04:25:45.628338 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-18 04:25:45.628357 | orchestrator | Wednesday 18 February 2026 04:25:45 +0000 (0:00:07.165) 0:02:37.157 **** 2026-02-18 04:25:45.628368 | orchestrator | =============================================================================== 2026-02-18 04:25:45.628396 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.20s 2026-02-18 04:25:45.628407 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.00s 2026-02-18 04:25:45.628436 | orchestrator | designate : Restart designate-api container ---------------------------- 10.75s 2026-02-18 04:25:45.628447 | orchestrator | designate : Restart designate-worker container ------------------------- 10.72s 2026-02-18 04:25:45.628465 | orchestrator | designate : Restart designate-central container ------------------------ 10.37s 2026-02-18 04:25:45.628477 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.89s 2026-02-18 04:25:45.628488 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.17s 2026-02-18 04:25:45.628498 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.82s 2026-02-18 04:25:45.628509 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.22s 2026-02-18 04:25:45.628520 | orchestrator | designate : Copying over config.json files for services ----------------- 6.02s 2026-02-18 04:25:45.628530 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.77s 2026-02-18 04:25:45.628541 | orchestrator | designate : Restart designate-producer container ------------------------ 5.39s 2026-02-18 04:25:45.628551 | orchestrator | designate : Check designate containers ---------------------------------- 4.76s 2026-02-18 04:25:45.628562 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.28s 2026-02-18 04:25:45.628572 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.01s 2026-02-18 04:25:45.628583 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.63s 2026-02-18 04:25:45.628593 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.60s 2026-02-18 04:25:45.628604 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.43s 2026-02-18 04:25:45.628614 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.35s 2026-02-18 04:25:45.628625 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.90s 2026-02-18 04:25:47.916980 | orchestrator | 2026-02-18 04:25:47 | INFO  | Task 9d4d07e0-6ee9-4afb-abc3-e537fc9499b5 (octavia) was prepared for execution. 2026-02-18 04:25:47.917075 | orchestrator | 2026-02-18 04:25:47 | INFO  | It takes a moment until task 9d4d07e0-6ee9-4afb-abc3-e537fc9499b5 (octavia) has been started and output is visible here. 
2026-02-18 04:27:56.055105 | orchestrator | 2026-02-18 04:27:56.055243 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 04:27:56.055259 | orchestrator | 2026-02-18 04:27:56.055271 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 04:27:56.055283 | orchestrator | Wednesday 18 February 2026 04:25:51 +0000 (0:00:00.226) 0:00:00.226 **** 2026-02-18 04:27:56.055294 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:27:56.055306 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:27:56.055317 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:27:56.055328 | orchestrator | 2026-02-18 04:27:56.055339 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 04:27:56.055350 | orchestrator | Wednesday 18 February 2026 04:25:52 +0000 (0:00:00.219) 0:00:00.446 **** 2026-02-18 04:27:56.055361 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-18 04:27:56.055372 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-18 04:27:56.055383 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-18 04:27:56.055393 | orchestrator | 2026-02-18 04:27:56.055405 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-18 04:27:56.055417 | orchestrator | 2026-02-18 04:27:56.055428 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-18 04:27:56.055467 | orchestrator | Wednesday 18 February 2026 04:25:52 +0000 (0:00:00.327) 0:00:00.773 **** 2026-02-18 04:27:56.055479 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:27:56.055491 | orchestrator | 2026-02-18 04:27:56.055502 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-18 04:27:56.055512 | orchestrator | Wednesday 18 February 2026 04:25:52 +0000 (0:00:00.416) 0:00:01.189 **** 2026-02-18 04:27:56.055524 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-18 04:27:56.055535 | orchestrator | 2026-02-18 04:27:56.055546 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-18 04:27:56.055632 | orchestrator | Wednesday 18 February 2026 04:25:56 +0000 (0:00:03.495) 0:00:04.684 **** 2026-02-18 04:27:56.055645 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-18 04:27:56.055659 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-18 04:27:56.055672 | orchestrator | 2026-02-18 04:27:56.055685 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-18 04:27:56.055696 | orchestrator | Wednesday 18 February 2026 04:26:03 +0000 (0:00:06.787) 0:00:11.472 **** 2026-02-18 04:27:56.055706 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-18 04:27:56.055718 | orchestrator | 2026-02-18 04:27:56.055728 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-18 04:27:56.055739 | orchestrator | Wednesday 18 February 2026 04:26:06 +0000 (0:00:03.279) 0:00:14.752 **** 2026-02-18 04:27:56.055750 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-18 04:27:56.055761 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-18 04:27:56.055772 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-18 04:27:56.055782 | orchestrator | 2026-02-18 04:27:56.055793 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-18 04:27:56.055804 | orchestrator | Wednesday 18 February 2026 04:26:15 +0000 
(0:00:08.653) 0:00:23.405 **** 2026-02-18 04:27:56.055815 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-18 04:27:56.055826 | orchestrator | 2026-02-18 04:27:56.055836 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-18 04:27:56.055863 | orchestrator | Wednesday 18 February 2026 04:26:18 +0000 (0:00:03.369) 0:00:26.774 **** 2026-02-18 04:27:56.055875 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-18 04:27:56.055885 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-18 04:27:56.055896 | orchestrator | 2026-02-18 04:27:56.055907 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-18 04:27:56.055917 | orchestrator | Wednesday 18 February 2026 04:26:25 +0000 (0:00:07.304) 0:00:34.078 **** 2026-02-18 04:27:56.055928 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-18 04:27:56.055939 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-18 04:27:56.055949 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-18 04:27:56.055960 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-18 04:27:56.055970 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-18 04:27:56.055981 | orchestrator | 2026-02-18 04:27:56.055992 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-18 04:27:56.056003 | orchestrator | Wednesday 18 February 2026 04:26:41 +0000 (0:00:15.629) 0:00:49.708 **** 2026-02-18 04:27:56.056013 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:27:56.056024 | orchestrator | 2026-02-18 04:27:56.056035 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-18 04:27:56.056054 | orchestrator | Wednesday 18 February 2026 04:26:42 +0000 (0:00:00.754) 0:00:50.463 **** 2026-02-18 04:27:56.056065 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:27:56.056076 | orchestrator | 2026-02-18 04:27:56.056087 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-18 04:27:56.056097 | orchestrator | Wednesday 18 February 2026 04:26:47 +0000 (0:00:04.898) 0:00:55.361 **** 2026-02-18 04:27:56.056108 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:27:56.056119 | orchestrator | 2026-02-18 04:27:56.056130 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-18 04:27:56.056160 | orchestrator | Wednesday 18 February 2026 04:26:51 +0000 (0:00:04.035) 0:00:59.396 **** 2026-02-18 04:27:56.056171 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:27:56.056182 | orchestrator | 2026-02-18 04:27:56.056193 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-18 04:27:56.056203 | orchestrator | Wednesday 18 February 2026 04:26:54 +0000 (0:00:03.307) 0:01:02.704 **** 2026-02-18 04:27:56.056214 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-18 04:27:56.056225 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-18 04:27:56.056235 | orchestrator | 2026-02-18 04:27:56.056246 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-18 04:27:56.056257 | orchestrator | Wednesday 18 February 2026 04:27:04 +0000 (0:00:10.169) 0:01:12.873 **** 2026-02-18 04:27:56.056267 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-18 04:27:56.056278 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-18 04:27:56.056290 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-18 04:27:56.056301 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-18 04:27:56.056312 | orchestrator | 2026-02-18 04:27:56.056323 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-18 04:27:56.056333 | orchestrator | Wednesday 18 February 2026 04:27:22 +0000 (0:00:17.858) 0:01:30.732 **** 2026-02-18 04:27:56.056348 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:27:56.056359 | orchestrator | 2026-02-18 04:27:56.056370 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-18 04:27:56.056381 | orchestrator | Wednesday 18 February 2026 04:27:27 +0000 (0:00:04.739) 0:01:35.472 **** 2026-02-18 04:27:56.056391 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:27:56.056402 | orchestrator | 2026-02-18 04:27:56.056413 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-18 04:27:56.056424 | orchestrator | Wednesday 18 February 2026 04:27:32 +0000 (0:00:05.586) 0:01:41.059 **** 2026-02-18 04:27:56.056434 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:27:56.056445 | orchestrator | 2026-02-18 04:27:56.056456 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-18 04:27:56.056467 | orchestrator | Wednesday 18 February 2026 04:27:33 +0000 (0:00:00.215) 0:01:41.274 **** 2026-02-18 04:27:56.056477 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:27:56.056488 | orchestrator | 2026-02-18 04:27:56.056499 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-18 04:27:56.056510 | orchestrator | Wednesday 18 February 2026 04:27:37 +0000 (0:00:04.425) 0:01:45.700 **** 2026-02-18 04:27:56.056521 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:27:56.056531 | orchestrator | 2026-02-18 04:27:56.056542 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-18 04:27:56.056574 | orchestrator | Wednesday 18 February 2026 04:27:38 +0000 (0:00:01.126) 0:01:46.826 **** 2026-02-18 04:27:56.056592 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:27:56.056603 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:27:56.056614 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:27:56.056625 | orchestrator | 2026-02-18 04:27:56.056636 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-18 04:27:56.056653 | orchestrator | Wednesday 18 February 2026 04:27:43 +0000 (0:00:05.362) 0:01:52.188 **** 2026-02-18 04:27:56.056663 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:27:56.056674 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:27:56.056698 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:27:56.056709 | orchestrator | 2026-02-18 04:27:56.056731 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-18 04:27:56.056742 | orchestrator | Wednesday 18 February 2026 04:27:48 +0000 (0:00:04.307) 0:01:56.496 **** 2026-02-18 04:27:56.056753 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:27:56.056764 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:27:56.056775 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:27:56.056785 | orchestrator | 2026-02-18 04:27:56.056796 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-18 
04:27:56.056807 | orchestrator | Wednesday 18 February 2026 04:27:49 +0000 (0:00:01.012) 0:01:57.509 **** 2026-02-18 04:27:56.056818 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:27:56.056829 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:27:56.056839 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:27:56.056850 | orchestrator | 2026-02-18 04:27:56.056861 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-18 04:27:56.056872 | orchestrator | Wednesday 18 February 2026 04:27:51 +0000 (0:00:02.085) 0:01:59.594 **** 2026-02-18 04:27:56.056883 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:27:56.056894 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:27:56.056904 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:27:56.056915 | orchestrator | 2026-02-18 04:27:56.057015 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-18 04:27:56.057027 | orchestrator | Wednesday 18 February 2026 04:27:52 +0000 (0:00:01.210) 0:02:00.805 **** 2026-02-18 04:27:56.057038 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:27:56.057049 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:27:56.057059 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:27:56.057070 | orchestrator | 2026-02-18 04:27:56.057081 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-18 04:27:56.057092 | orchestrator | Wednesday 18 February 2026 04:27:53 +0000 (0:00:01.174) 0:02:01.980 **** 2026-02-18 04:27:56.057103 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:27:56.057113 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:27:56.057124 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:27:56.057135 | orchestrator | 2026-02-18 04:27:56.057154 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-18 04:28:21.557464 | orchestrator 
| Wednesday 18 February 2026 04:27:56 +0000 (0:00:02.291) 0:02:04.271 **** 2026-02-18 04:28:21.557578 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:28:21.557694 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:28:21.557713 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:28:21.557731 | orchestrator | 2026-02-18 04:28:21.557752 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-18 04:28:21.557771 | orchestrator | Wednesday 18 February 2026 04:27:57 +0000 (0:00:01.435) 0:02:05.707 **** 2026-02-18 04:28:21.557789 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:28:21.557807 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:28:21.557824 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:28:21.557839 | orchestrator | 2026-02-18 04:28:21.557855 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-18 04:28:21.557871 | orchestrator | Wednesday 18 February 2026 04:27:58 +0000 (0:00:00.633) 0:02:06.341 **** 2026-02-18 04:28:21.557887 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:28:21.557933 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:28:21.557950 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:28:21.557965 | orchestrator | 2026-02-18 04:28:21.557982 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-18 04:28:21.557999 | orchestrator | Wednesday 18 February 2026 04:28:01 +0000 (0:00:03.184) 0:02:09.525 **** 2026-02-18 04:28:21.558114 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:28:21.558141 | orchestrator | 2026-02-18 04:28:21.558158 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-18 04:28:21.558172 | orchestrator | Wednesday 18 February 2026 04:28:01 +0000 (0:00:00.534) 0:02:10.060 **** 2026-02-18 
04:28:21.558185 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:28:21.558199 | orchestrator | 2026-02-18 04:28:21.558214 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-18 04:28:21.558227 | orchestrator | Wednesday 18 February 2026 04:28:05 +0000 (0:00:03.380) 0:02:13.441 **** 2026-02-18 04:28:21.558241 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:28:21.558255 | orchestrator | 2026-02-18 04:28:21.558269 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-18 04:28:21.558284 | orchestrator | Wednesday 18 February 2026 04:28:08 +0000 (0:00:03.273) 0:02:16.714 **** 2026-02-18 04:28:21.558298 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-18 04:28:21.558314 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-18 04:28:21.558330 | orchestrator | 2026-02-18 04:28:21.558345 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-18 04:28:21.558359 | orchestrator | Wednesday 18 February 2026 04:28:15 +0000 (0:00:07.073) 0:02:23.788 **** 2026-02-18 04:28:21.558373 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:28:21.558387 | orchestrator | 2026-02-18 04:28:21.558400 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-18 04:28:21.558415 | orchestrator | Wednesday 18 February 2026 04:28:19 +0000 (0:00:03.461) 0:02:27.250 **** 2026-02-18 04:28:21.558429 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:28:21.558443 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:28:21.558457 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:28:21.558472 | orchestrator | 2026-02-18 04:28:21.558487 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-18 04:28:21.558501 | orchestrator | Wednesday 18 February 2026 04:28:19 +0000 (0:00:00.478) 0:02:27.729 **** 
2026-02-18 04:28:21.558560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:21.558627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:21.558658 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:21.558674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:28:21.558689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:28:21.558711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:28:21.558727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:21.558767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:21.558802 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:22.959383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:22.959526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:22.959555 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:22.959673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:28:22.959696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:28:22.959745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:28:22.959768 | orchestrator | 2026-02-18 04:28:22.959788 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-18 04:28:22.959809 | orchestrator | Wednesday 18 February 2026 04:28:21 +0000 (0:00:02.461) 0:02:30.191 **** 2026-02-18 04:28:22.959827 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:28:22.959846 | orchestrator | 2026-02-18 04:28:22.959865 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-18 04:28:22.959884 | orchestrator | Wednesday 18 February 2026 04:28:22 +0000 (0:00:00.135) 0:02:30.326 **** 2026-02-18 04:28:22.959903 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:28:22.959945 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:28:22.959966 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:28:22.959984 | orchestrator | 2026-02-18 04:28:22.960004 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-18 04:28:22.960024 | orchestrator | Wednesday 18 February 2026 04:28:22 +0000 (0:00:00.315) 0:02:30.641 **** 2026-02-18 04:28:22.960046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 04:28:22.960066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 04:28:22.960089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 04:28:22.960104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 04:28:22.960129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:28:22.960141 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:28:22.960167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 04:28:27.703001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 04:28:27.703084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 04:28:27.703107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 04:28:27.703116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:28:27.703141 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:28:27.703149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 04:28:27.703158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 04:28:27.703177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 04:28:27.703184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 04:28:27.703194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:28:27.703207 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:28:27.703214 | orchestrator | 2026-02-18 04:28:27.703221 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-18 04:28:27.703229 | orchestrator | Wednesday 18 February 2026 04:28:23 +0000 (0:00:00.628) 0:02:31.270 **** 2026-02-18 04:28:27.703235 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:28:27.703242 | orchestrator | 2026-02-18 04:28:27.703248 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-18 04:28:27.703254 | orchestrator | Wednesday 18 February 2026 04:28:23 +0000 (0:00:00.700) 0:02:31.970 **** 2026-02-18 04:28:27.703261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:27.703269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:27.703281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:29.232684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:28:29.232829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:28:29.232846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:28:29.232859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:29.232872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:29.232884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:29.232912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:29.232925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:29.232949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:29.232961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:28:29.232973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:28:29.232984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:28:29.232997 | orchestrator | 2026-02-18 04:28:29.233010 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-18 04:28:29.233022 | orchestrator | Wednesday 18 February 2026 04:28:28 +0000 (0:00:04.941) 0:02:36.912 **** 2026-02-18 04:28:29.233043 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 04:28:29.341818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 04:28:29.341931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 04:28:29.341947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 04:28:29.341960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:28:29.341972 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:28:29.341985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 04:28:29.341998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 04:28:29.342100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 04:28:29.342120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 04:28:29.342132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:28:29.342143 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:28:29.342154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 04:28:29.342166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 04:28:29.342177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 04:28:29.342213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-02-18 04:28:30.095251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:28:30.095360 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:28:30.095378 | orchestrator | 2026-02-18 04:28:30.095391 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-18 04:28:30.095403 | orchestrator | Wednesday 18 February 2026 04:28:29 +0000 (0:00:00.657) 0:02:37.570 **** 2026-02-18 04:28:30.095416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-18 04:28:30.095429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 04:28:30.095441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 04:28:30.095454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 04:28:30.095508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:28:30.095521 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:28:30.095539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 04:28:30.095550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 04:28:30.095562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 04:28:30.095573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 04:28:30.095670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:28:30.095693 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:28:30.095721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 04:28:34.744051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 04:28:34.744171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 04:28:34.744191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 04:28:34.744205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 04:28:34.744339 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:28:34.744355 | orchestrator | 2026-02-18 04:28:34.744368 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-18 
04:28:34.744380 | orchestrator | Wednesday 18 February 2026 04:28:30 +0000 (0:00:01.215) 0:02:38.786 **** 2026-02-18 04:28:34.744393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:34.744438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:34.744453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:34.744465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:28:34.744476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:28:34.744496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:28:34.744508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:34.744531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:50.078205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:50.078316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:50.078332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:50.078372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:28:50.078386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:28:50.078399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-18 04:28:50.078443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:28:50.078457 | orchestrator | 2026-02-18 04:28:50.078470 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-18 04:28:50.078483 | orchestrator | Wednesday 18 February 2026 04:28:35 +0000 (0:00:05.154) 0:02:43.941 **** 2026-02-18 04:28:50.078494 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-18 04:28:50.078506 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-18 04:28:50.078517 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-18 04:28:50.078528 | orchestrator | 2026-02-18 04:28:50.078538 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-18 04:28:50.078549 | orchestrator | Wednesday 18 February 2026 04:28:37 +0000 (0:00:01.581) 0:02:45.522 **** 2026-02-18 04:28:50.078562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:50.078584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:50.078596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:28:50.078662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:29:04.994104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:29:04.994221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:29:04.994239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:29:04.994279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:29:04.994291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:29:04.994304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:29:04.994348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:29:04.994361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:29:04.994373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:29:04.994394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:29:04.994405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-18 04:29:04.994417 | orchestrator |
2026-02-18 04:29:04.994430 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-02-18 04:29:04.994443 | orchestrator | Wednesday 18 February 2026 04:28:53 +0000 (0:00:15.963) 0:03:01.486 ****
2026-02-18 04:29:04.994454 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:29:04.994465 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:29:04.994477 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:29:04.994488 | orchestrator |
2026-02-18 04:29:04.994505 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-02-18 04:29:04.994525 | orchestrator | Wednesday 18 February 2026 04:28:55 +0000 (0:00:01.769) 0:03:03.255 ****
2026-02-18 04:29:04.994545 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-18 04:29:04.994568 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-18 04:29:04.994588 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-18 04:29:04.994602 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-18 04:29:04.994614 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-18 04:29:04.994627 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-18 04:29:04.994668 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-18 04:29:04.994683 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-18 04:29:04.994695 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-18 04:29:04.994708 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-18 04:29:04.994720 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-18 04:29:04.994732 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-02-18 04:29:04.994745 | orchestrator |
2026-02-18 04:29:04.994757 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-02-18 04:29:04.994776 | orchestrator | Wednesday 18 February 2026 04:28:59 +0000 (0:00:04.889) 0:03:08.145 ****
2026-02-18 04:29:04.994789 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-18 04:29:04.994801 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-18 04:29:04.994822 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-18 04:29:13.231332 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-18 04:29:13.231442 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-18 04:29:13.231458 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-18 04:29:13.231470 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-18 04:29:13.231481 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-18 04:29:13.231492 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-18 04:29:13.231503 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-18 04:29:13.231513 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-18 04:29:13.231524 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-02-18 04:29:13.231535 | orchestrator |
2026-02-18 04:29:13.231547 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-02-18 04:29:13.231559 | orchestrator | Wednesday 18 February 2026 04:29:04 +0000 (0:00:05.072) 0:03:13.218 ****
2026-02-18 04:29:13.231570 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-18 04:29:13.231580 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-18 04:29:13.231591 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-18 04:29:13.231602 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-18 04:29:13.231612 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-18 04:29:13.231623 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-18 04:29:13.231634 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-18 04:29:13.231644 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-18 04:29:13.231689 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-18 04:29:13.231701 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-18 04:29:13.231712 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-18 04:29:13.231723 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-02-18 04:29:13.231733 | orchestrator |
2026-02-18 04:29:13.231744 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-02-18 04:29:13.231755 | orchestrator | Wednesday 18 February 2026 04:29:10 +0000 (0:00:05.079) 0:03:18.297 ****
2026-02-18 04:29:13.231770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:29:13.231785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:29:13.231871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 04:29:13.231887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:29:13.231900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-18 04:29:13.231911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-18 04:29:13.231924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:29:13.231936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:29:13.231964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-18 04:29:13.231984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:30:40.619900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:30:40.620008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-18 04:30:40.620024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:30:40.620037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:30:40.620076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-18 04:30:40.620089 | orchestrator | 2026-02-18 
04:30:40.620103 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-18 04:30:40.620116 | orchestrator | Wednesday 18 February 2026 04:29:14 +0000 (0:00:04.042) 0:03:22.339 ****
2026-02-18 04:30:40.620127 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:30:40.620139 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:30:40.620150 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:30:40.620161 | orchestrator |
2026-02-18 04:30:40.620186 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-02-18 04:30:40.620198 | orchestrator | Wednesday 18 February 2026 04:29:14 +0000 (0:00:00.492) 0:03:22.831 ****
2026-02-18 04:30:40.620209 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:30:40.620219 | orchestrator |
2026-02-18 04:30:40.620230 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-02-18 04:30:40.620240 | orchestrator | Wednesday 18 February 2026 04:29:16 +0000 (0:00:02.180) 0:03:25.011 ****
2026-02-18 04:30:40.620251 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:30:40.620261 | orchestrator |
2026-02-18 04:30:40.620272 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-02-18 04:30:40.620283 | orchestrator | Wednesday 18 February 2026 04:29:19 +0000 (0:00:02.482) 0:03:27.341 ****
2026-02-18 04:30:40.620293 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:30:40.620304 | orchestrator |
2026-02-18 04:30:40.620315 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-02-18 04:30:40.620327 | orchestrator | Wednesday 18 February 2026 04:29:21 +0000 (0:00:02.501) 0:03:29.824 ****
2026-02-18 04:30:40.620355 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:30:40.620367 | orchestrator |
2026-02-18 04:30:40.620377 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-02-18 04:30:40.620388 | orchestrator | Wednesday 18 February 2026 04:29:24 +0000 (0:00:23.216) 0:03:32.326 ****
2026-02-18 04:30:40.620399 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:30:40.620409 | orchestrator |
2026-02-18 04:30:40.620420 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-18 04:30:40.620430 | orchestrator | Wednesday 18 February 2026 04:29:47 +0000 (0:00:23.216) 0:03:55.542 ****
2026-02-18 04:30:40.620441 | orchestrator |
2026-02-18 04:30:40.620451 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-18 04:30:40.620462 | orchestrator | Wednesday 18 February 2026 04:29:47 +0000 (0:00:00.068) 0:03:55.610 ****
2026-02-18 04:30:40.620472 | orchestrator |
2026-02-18 04:30:40.620483 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-18 04:30:40.620494 | orchestrator | Wednesday 18 February 2026 04:29:47 +0000 (0:00:00.068) 0:03:55.679 ****
2026-02-18 04:30:40.620504 | orchestrator |
2026-02-18 04:30:40.620514 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-02-18 04:30:40.620525 | orchestrator | Wednesday 18 February 2026 04:29:47 +0000 (0:00:00.066) 0:03:55.746 ****
2026-02-18 04:30:40.620536 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:30:40.620546 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:30:40.620557 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:30:40.620567 | orchestrator |
2026-02-18 04:30:40.620578 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-02-18 04:30:40.620589 | orchestrator | Wednesday 18 February 2026 04:30:04 +0000 (0:00:17.159) 0:04:12.905 ****
2026-02-18 04:30:40.620608 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:30:40.620618 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:30:40.620629 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:30:40.620640 | orchestrator |
2026-02-18 04:30:40.620651 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-02-18 04:30:40.620662 | orchestrator | Wednesday 18 February 2026 04:30:15 +0000 (0:00:11.119) 0:04:24.025 ****
2026-02-18 04:30:40.620673 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:30:40.620683 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:30:40.620694 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:30:40.620705 | orchestrator |
2026-02-18 04:30:40.620716 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-02-18 04:30:40.620726 | orchestrator | Wednesday 18 February 2026 04:30:21 +0000 (0:00:05.380) 0:04:29.406 ****
2026-02-18 04:30:40.620737 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:30:40.620748 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:30:40.620783 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:30:40.620796 | orchestrator |
2026-02-18 04:30:40.620807 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-02-18 04:30:40.620818 | orchestrator | Wednesday 18 February 2026 04:30:29 +0000 (0:00:08.309) 0:04:37.716 ****
2026-02-18 04:30:40.620829 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:30:40.620840 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:30:40.620850 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:30:40.620861 | orchestrator |
2026-02-18 04:30:40.620872 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 04:30:40.620884 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-18 04:30:40.620896 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-18 04:30:40.620907 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-18 04:30:40.620918 | orchestrator |
2026-02-18 04:30:40.620929 | orchestrator |
2026-02-18 04:30:40.620941 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 04:30:40.620951 | orchestrator | Wednesday 18 February 2026 04:30:40 +0000 (0:00:11.103) 0:04:48.820 ****
2026-02-18 04:30:40.620962 | orchestrator | ===============================================================================
2026-02-18 04:30:40.620973 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.22s
2026-02-18 04:30:40.620983 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.86s
2026-02-18 04:30:40.620994 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.16s
2026-02-18 04:30:40.621005 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.96s
2026-02-18 04:30:40.621015 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.63s
2026-02-18 04:30:40.621033 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.12s
2026-02-18 04:30:40.621044 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.10s
2026-02-18 04:30:40.621055 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.17s
2026-02-18 04:30:40.621065 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.65s
2026-02-18 04:30:40.621076 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.31s
2026-02-18 04:30:40.621087 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.30s
2026-02-18 04:30:40.621097 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.07s
2026-02-18 04:30:40.621108 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.79s
2026-02-18 04:30:40.621126 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.59s
2026-02-18 04:30:40.621144 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.38s
2026-02-18 04:30:40.918556 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.36s
2026-02-18 04:30:40.918663 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.15s
2026-02-18 04:30:40.918677 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.08s
2026-02-18 04:30:40.918688 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.07s
2026-02-18 04:30:40.918699 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.94s
2026-02-18 04:30:43.191849 | orchestrator | 2026-02-18 04:30:43 | INFO  | Task f842e19a-d807-47f1-8047-c9c534203742 (ceilometer) was prepared for execution.
2026-02-18 04:30:43.191947 | orchestrator | 2026-02-18 04:30:43 | INFO  | It takes a moment until task f842e19a-d807-47f1-8047-c9c534203742 (ceilometer) has been started and output is visible here.
2026-02-18 04:31:06.599334 | orchestrator |
2026-02-18 04:31:06.599452 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 04:31:06.599471 | orchestrator |
2026-02-18 04:31:06.599483 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 04:31:06.599494 | orchestrator | Wednesday 18 February 2026 04:30:47 +0000 (0:00:00.266) 0:00:00.266 ****
2026-02-18 04:31:06.599505 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:31:06.599517 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:31:06.599529 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:31:06.599540 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:31:06.599551 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:31:06.599562 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:31:06.599573 | orchestrator |
2026-02-18 04:31:06.599584 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 04:31:06.599595 | orchestrator | Wednesday 18 February 2026 04:30:48 +0000 (0:00:00.701) 0:00:00.968 ****
2026-02-18 04:31:06.599606 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True)
2026-02-18 04:31:06.599617 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True)
2026-02-18 04:31:06.599628 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True)
2026-02-18 04:31:06.599639 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True)
2026-02-18 04:31:06.599649 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True)
2026-02-18 04:31:06.599660 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True)
2026-02-18 04:31:06.599675 | orchestrator |
2026-02-18 04:31:06.599695 | orchestrator | PLAY [Apply role ceilometer] ***************************************************
2026-02-18 04:31:06.599724 | orchestrator |
2026-02-18 04:31:06.599746 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-02-18 04:31:06.599765 | orchestrator | Wednesday 18 February 2026 04:30:48 +0000 (0:00:00.604) 0:00:01.572 ****
2026-02-18 04:31:06.599785 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 04:31:06.599852 | orchestrator |
2026-02-18 04:31:06.599872 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ********************
2026-02-18 04:31:06.599920 | orchestrator | Wednesday 18 February 2026 04:30:49 +0000 (0:00:01.206) 0:00:02.779 ****
2026-02-18 04:31:06.599942 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:31:06.599962 | orchestrator |
2026-02-18 04:31:06.599976 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] *******************
2026-02-18 04:31:06.599989 | orchestrator | Wednesday 18 February 2026 04:30:50 +0000 (0:00:00.134) 0:00:02.914 ****
2026-02-18 04:31:06.600001 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:31:06.600014 | orchestrator |
2026-02-18 04:31:06.600027 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ********************
2026-02-18 04:31:06.600066 | orchestrator | Wednesday 18 February 2026 04:30:50 +0000 (0:00:00.135) 0:00:03.049 ****
2026-02-18 04:31:06.600080 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-18 04:31:06.600092 | orchestrator |
2026-02-18 04:31:06.600105 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] ***********************
2026-02-18 04:31:06.600117 | orchestrator | Wednesday 18 February 2026 04:30:53 +0000 (0:00:03.529) 0:00:06.578 ****
2026-02-18 04:31:06.600130 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-18 04:31:06.600143 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service)
2026-02-18 04:31:06.600155 | orchestrator |
2026-02-18 04:31:06.600168 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] ***********************
2026-02-18 04:31:06.600180 | orchestrator | Wednesday 18 February 2026 04:30:57 +0000 (0:00:03.839) 0:00:10.418 ****
2026-02-18 04:31:06.600192 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-18 04:31:06.600205 | orchestrator |
2026-02-18 04:31:06.600218 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ******************
2026-02-18 04:31:06.600244 | orchestrator | Wednesday 18 February 2026 04:31:00 +0000 (0:00:03.361) 0:00:13.779 ****
2026-02-18 04:31:06.600256 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin)
2026-02-18 04:31:06.600266 | orchestrator |
2026-02-18 04:31:06.600277 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] *******
2026-02-18 04:31:06.600287 | orchestrator | Wednesday 18 February 2026 04:31:04 +0000 (0:00:04.093) 0:00:17.873 ****
2026-02-18 04:31:06.600298 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:31:06.600309 | orchestrator |
2026-02-18 04:31:06.600319 | orchestrator | TASK [ceilometer : Ensuring config directories exist] **************************
2026-02-18 04:31:06.600330 | orchestrator | Wednesday 18 February 2026 04:31:05 +0000 (0:00:00.138) 0:00:18.011 ****
2026-02-18 04:31:06.600344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:06.600379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:06.600393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:06.600405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:06.600490 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:06.600504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:06.600515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:06.600535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:11.211238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:11.211346 | orchestrator | 2026-02-18 04:31:11.211363 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-18 04:31:11.211399 | orchestrator | Wednesday 18 February 2026 04:31:06 +0000 (0:00:01.456) 0:00:19.468 **** 2026-02-18 
04:31:11.211411 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 04:31:11.211424 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-18 04:31:11.211434 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-18 04:31:11.211445 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-18 04:31:11.211456 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-18 04:31:11.211467 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-18 04:31:11.211478 | orchestrator | 2026-02-18 04:31:11.211488 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-18 04:31:11.211500 | orchestrator | Wednesday 18 February 2026 04:31:08 +0000 (0:00:01.589) 0:00:21.058 **** 2026-02-18 04:31:11.211511 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:31:11.211522 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:31:11.211533 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:31:11.211543 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:31:11.211554 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:31:11.211564 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:31:11.211575 | orchestrator | 2026-02-18 04:31:11.211586 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-18 04:31:11.211597 | orchestrator | Wednesday 18 February 2026 04:31:08 +0000 (0:00:00.589) 0:00:21.647 **** 2026-02-18 04:31:11.211608 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:11.211618 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:11.211629 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:11.211641 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:11.211651 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:11.211662 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:11.211673 | orchestrator | 2026-02-18 04:31:11.211683 | orchestrator | TASK [ceilometer : Set 
the variable that control the copy of custom meter definitions] *** 2026-02-18 04:31:11.211695 | orchestrator | Wednesday 18 February 2026 04:31:09 +0000 (0:00:00.766) 0:00:22.413 **** 2026-02-18 04:31:11.211706 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:31:11.211716 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:31:11.211727 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:31:11.211738 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:31:11.211748 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:31:11.211842 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:31:11.211859 | orchestrator | 2026-02-18 04:31:11.211872 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-18 04:31:11.211885 | orchestrator | Wednesday 18 February 2026 04:31:10 +0000 (0:00:00.614) 0:00:23.028 **** 2026-02-18 04:31:11.211904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:11.211919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:11.211941 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:11.211975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:11.211989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:11.212002 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:11.212015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:11.212028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:11.212046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:11.212059 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:11.212072 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:11.212085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:11.212104 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:11.212126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:15.740434 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:15.740556 | orchestrator | 2026-02-18 04:31:15.740574 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-18 04:31:15.740587 | orchestrator | Wednesday 18 February 2026 04:31:11 +0000 (0:00:01.053) 0:00:24.081 **** 2026-02-18 04:31:15.740601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:15.740617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:15.740630 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:15.740658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:15.740671 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:15.740704 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:15.740716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:15.740728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-02-18 04:31:15.740739 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:15.740769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:15.740783 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:15.740794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:15.740852 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:15.740869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:15.740880 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:15.740891 | orchestrator | 2026-02-18 04:31:15.740905 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-18 04:31:15.740927 | orchestrator | Wednesday 18 February 2026 04:31:12 +0000 (0:00:00.833) 0:00:24.914 **** 2026-02-18 04:31:15.740940 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 04:31:15.740953 | orchestrator | 2026-02-18 04:31:15.740965 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-18 04:31:15.740978 | orchestrator | Wednesday 18 February 2026 04:31:12 +0000 (0:00:00.675) 0:00:25.590 **** 2026-02-18 04:31:15.740991 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:31:15.741004 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:31:15.741015 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:31:15.741028 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:31:15.741040 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:31:15.741052 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:31:15.741065 | orchestrator | 2026-02-18 04:31:15.741078 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-18 04:31:15.741090 | orchestrator | Wednesday 18 February 2026 04:31:13 +0000 
(0:00:00.753) 0:00:26.344 **** 2026-02-18 04:31:15.741102 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:31:15.741114 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:31:15.741126 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:31:15.741139 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:31:15.741151 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:31:15.741163 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:31:15.741175 | orchestrator | 2026-02-18 04:31:15.741187 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-18 04:31:15.741200 | orchestrator | Wednesday 18 February 2026 04:31:14 +0000 (0:00:00.922) 0:00:27.267 **** 2026-02-18 04:31:15.741212 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:15.741225 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:15.741238 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:15.741250 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:15.741262 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:15.741273 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:15.741283 | orchestrator | 2026-02-18 04:31:15.741294 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-18 04:31:15.741305 | orchestrator | Wednesday 18 February 2026 04:31:15 +0000 (0:00:00.741) 0:00:28.009 **** 2026-02-18 04:31:15.741316 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:15.741326 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:15.741338 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:15.741349 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:15.741359 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:15.741370 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:15.741381 | orchestrator | 2026-02-18 04:31:20.551647 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-18 04:31:20.551829 | orchestrator | Wednesday 18 February 2026 04:31:15 +0000 (0:00:00.609) 0:00:28.618 **** 2026-02-18 04:31:20.551847 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 04:31:20.551861 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-18 04:31:20.551873 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-18 04:31:20.551884 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-18 04:31:20.551895 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-18 04:31:20.551906 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-18 04:31:20.551917 | orchestrator | 2026-02-18 04:31:20.551929 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-18 04:31:20.551940 | orchestrator | Wednesday 18 February 2026 04:31:17 +0000 (0:00:01.391) 0:00:30.010 **** 2026-02-18 04:31:20.551955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:20.552003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:20.552018 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:20.552078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:20.552091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:20.552102 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:20.552113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:20.552147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:20.552161 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:20.552176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:20.552199 | orchestrator | skipping: [testbed-node-3] 
2026-02-18 04:31:20.552212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:20.552225 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:20.552244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:20.552257 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:20.552270 | orchestrator | 2026-02-18 04:31:20.552282 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-02-18 04:31:20.552295 | orchestrator | Wednesday 18 February 2026 04:31:17 +0000 (0:00:00.822) 0:00:30.832 **** 2026-02-18 04:31:20.552307 | orchestrator | 
skipping: [testbed-node-0] 2026-02-18 04:31:20.552319 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:20.552331 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:20.552343 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:20.552355 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:20.552367 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:20.552379 | orchestrator | 2026-02-18 04:31:20.552392 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-02-18 04:31:20.552403 | orchestrator | Wednesday 18 February 2026 04:31:18 +0000 (0:00:00.756) 0:00:31.588 **** 2026-02-18 04:31:20.552414 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 04:31:20.552425 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-18 04:31:20.552436 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-18 04:31:20.552446 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-18 04:31:20.552457 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-18 04:31:20.552468 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-18 04:31:20.552478 | orchestrator | 2026-02-18 04:31:20.552489 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-02-18 04:31:20.552499 | orchestrator | Wednesday 18 February 2026 04:31:20 +0000 (0:00:01.416) 0:00:33.005 **** 2026-02-18 04:31:20.552520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:26.213574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:26.213695 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:26.213722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:26.213759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:26.213781 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:26.213800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:26.213901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:26.213916 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:26.213928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:26.213966 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:26.213997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:26.214009 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:26.214061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:26.214073 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:26.214084 | orchestrator | 2026-02-18 04:31:26.214096 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-02-18 04:31:26.214110 | orchestrator | Wednesday 18 February 2026 04:31:21 +0000 (0:00:01.055) 0:00:34.060 **** 2026-02-18 04:31:26.214123 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:26.214135 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:26.214147 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:26.214159 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:26.214171 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:26.214190 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:26.214203 | orchestrator | 2026-02-18 04:31:26.214216 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-02-18 04:31:26.214228 | orchestrator | Wednesday 18 February 2026 04:31:21 +0000 (0:00:00.751) 0:00:34.811 **** 2026-02-18 04:31:26.214240 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:26.214252 | orchestrator | 2026-02-18 04:31:26.214264 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-02-18 04:31:26.214276 | orchestrator | Wednesday 18 February 2026 04:31:22 +0000 (0:00:00.136) 0:00:34.948 **** 2026-02-18 04:31:26.214288 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:26.214301 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:26.214314 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:26.214325 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:26.214338 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:26.214350 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:26.214362 | 
orchestrator | 2026-02-18 04:31:26.214374 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-18 04:31:26.214386 | orchestrator | Wednesday 18 February 2026 04:31:22 +0000 (0:00:00.570) 0:00:35.519 **** 2026-02-18 04:31:26.214409 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 04:31:26.214423 | orchestrator | 2026-02-18 04:31:26.214435 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-02-18 04:31:26.214447 | orchestrator | Wednesday 18 February 2026 04:31:23 +0000 (0:00:01.281) 0:00:36.800 **** 2026-02-18 04:31:26.214460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:26.214482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:26.731727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:26.731861 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:26.731894 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:26.731905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:26.731936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:26.731947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:26.731974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:26.731986 | orchestrator | 2026-02-18 04:31:26.731997 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-02-18 04:31:26.732008 | orchestrator | Wednesday 18 February 2026 04:31:26 +0000 (0:00:02.284) 0:00:39.085 **** 2026-02-18 04:31:26.732019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:26.732035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:26.732053 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:26.732064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:26.732074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:26.732084 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:26.732094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:26.732111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:28.627792 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:28.627972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:28.627994 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:28.628024 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:28.628056 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:28.628068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': 
'30'}}})  2026-02-18 04:31:28.628079 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:28.628090 | orchestrator | 2026-02-18 04:31:28.628102 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-02-18 04:31:28.628114 | orchestrator | Wednesday 18 February 2026 04:31:27 +0000 (0:00:00.837) 0:00:39.923 **** 2026-02-18 04:31:28.628126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:28.628138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:28.628170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:28.628183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:28.628211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:28.628241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:28.628261 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:28.628280 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:28.628301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:28.628323 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:28.628337 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:28.628352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:28.628364 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:28.628394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:35.784255 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:35.784392 | orchestrator | 2026-02-18 04:31:35.784440 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-02-18 04:31:35.784454 | orchestrator | Wednesday 18 February 2026 04:31:28 +0000 (0:00:01.573) 0:00:41.496 **** 2026-02-18 04:31:35.784483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:35.784498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:35.784511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:35.784523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:35.784537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:35.784570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:35.784605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:35.784627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:35.784646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:35.784665 | orchestrator | 2026-02-18 04:31:35.784682 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-02-18 04:31:35.784698 | orchestrator | Wednesday 18 February 2026 04:31:31 +0000 (0:00:02.534) 0:00:44.031 
**** 2026-02-18 04:31:35.784716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:35.784737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:35.784768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:45.097898 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:45.098012 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:45.098080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:45.098093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:45.098107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:45.098119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:45.098156 | orchestrator | 2026-02-18 04:31:45.098169 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-02-18 04:31:45.098199 | orchestrator | Wednesday 18 February 2026 04:31:35 +0000 (0:00:04.626) 0:00:48.657 **** 2026-02-18 04:31:45.098212 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 04:31:45.098225 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-18 04:31:45.098235 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-18 04:31:45.098246 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-18 04:31:45.098257 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-18 04:31:45.098268 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-18 04:31:45.098279 | orchestrator | 2026-02-18 04:31:45.098290 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-02-18 04:31:45.098301 | orchestrator | Wednesday 18 February 2026 04:31:37 +0000 (0:00:01.416) 0:00:50.074 **** 2026-02-18 04:31:45.098312 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:45.098323 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:45.098333 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:45.098344 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:45.098362 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:45.098375 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:45.098388 | orchestrator | 2026-02-18 04:31:45.098400 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-02-18 
04:31:45.098413 | orchestrator | Wednesday 18 February 2026 04:31:37 +0000 (0:00:00.599) 0:00:50.673 **** 2026-02-18 04:31:45.098426 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:45.098439 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:45.098452 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:45.098466 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:31:45.098478 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:31:45.098491 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:31:45.098504 | orchestrator | 2026-02-18 04:31:45.098516 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-02-18 04:31:45.098530 | orchestrator | Wednesday 18 February 2026 04:31:39 +0000 (0:00:01.581) 0:00:52.255 **** 2026-02-18 04:31:45.098542 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:45.098554 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:45.098567 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:45.098579 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:31:45.098591 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:31:45.098603 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:31:45.098616 | orchestrator | 2026-02-18 04:31:45.098629 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-02-18 04:31:45.098642 | orchestrator | Wednesday 18 February 2026 04:31:40 +0000 (0:00:01.476) 0:00:53.732 **** 2026-02-18 04:31:45.098655 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 04:31:45.098667 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-18 04:31:45.098680 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-18 04:31:45.098692 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-18 04:31:45.098705 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-18 04:31:45.098717 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-02-18 04:31:45.098729 | orchestrator | 2026-02-18 04:31:45.098740 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-02-18 04:31:45.098751 | orchestrator | Wednesday 18 February 2026 04:31:42 +0000 (0:00:01.559) 0:00:55.291 **** 2026-02-18 04:31:45.098771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:45.098784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:45.098803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:45.908529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:45.908683 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:45.908713 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:31:45.908778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:45.908805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:45.908826 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:31:45.908870 | orchestrator | 2026-02-18 04:31:45.908886 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-02-18 04:31:45.908901 | orchestrator | Wednesday 18 February 2026 04:31:45 +0000 (0:00:02.679) 0:00:57.970 **** 2026-02-18 04:31:45.908947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:45.908979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:45.909000 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:45.909024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:45.909060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:45.909080 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:45.909102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:45.909122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:45.909154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:47.508088 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:47.508162 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:47.508184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:47.508209 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:47.508215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:47.508222 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:47.508228 | orchestrator | 2026-02-18 04:31:47.508234 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-02-18 04:31:47.508242 | orchestrator | Wednesday 18 February 2026 04:31:45 +0000 (0:00:00.817) 0:00:58.788 **** 2026-02-18 04:31:47.508247 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:31:47.508253 | orchestrator | skipping: 
[testbed-node-1] 2026-02-18 04:31:47.508259 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:47.508264 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:47.508270 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:47.508276 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:31:47.508281 | orchestrator | 2026-02-18 04:31:47.508287 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-02-18 04:31:47.508293 | orchestrator | Wednesday 18 February 2026 04:31:46 +0000 (0:00:00.763) 0:00:59.552 **** 2026-02-18 04:31:47.508300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:47.508308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:47.508315 | orchestrator | skipping: [testbed-node-0] 2026-02-18 
04:31:47.508332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:47.508342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:47.508354 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:31:47.508360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-18 04:31:47.508366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 04:31:47.508372 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:31:47.508378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:47.508384 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:31:47.508390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:31:47.508396 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:31:47.508405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-18 04:32:15.274503 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:32:15.274619 | orchestrator | 2026-02-18 04:32:15.274652 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-02-18 04:32:15.274665 | orchestrator | Wednesday 18 February 2026 04:31:47 +0000 (0:00:00.835) 0:01:00.387 **** 2026-02-18 04:32:15.274679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:15.274695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:15.274707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:15.274719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:15.274733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:15.274767 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:15.274802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:32:15.274815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:32:15.274826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-18 04:32:15.274838 | orchestrator | 
2026-02-18 04:32:15.274849 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-18 04:32:15.274860 | orchestrator | Wednesday 18 February 2026 04:31:49 +0000 (0:00:01.860) 0:01:02.247 **** 2026-02-18 04:32:15.274940 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:32:15.274956 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:32:15.274967 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:32:15.274978 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:32:15.274988 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:32:15.274999 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:32:15.275021 | orchestrator | 2026-02-18 04:32:15.275033 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-02-18 04:32:15.275043 | orchestrator | Wednesday 18 February 2026 04:31:49 +0000 (0:00:00.589) 0:01:02.837 **** 2026-02-18 04:32:15.275054 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:32:15.275064 | orchestrator | 2026-02-18 04:32:15.275075 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-18 04:32:15.275085 | orchestrator | Wednesday 18 February 2026 04:31:54 +0000 (0:00:05.004) 0:01:07.842 **** 2026-02-18 04:32:15.275096 | orchestrator | 2026-02-18 04:32:15.275106 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-18 04:32:15.275117 | orchestrator | Wednesday 18 February 2026 04:31:55 +0000 (0:00:00.071) 0:01:07.914 **** 2026-02-18 04:32:15.275127 | orchestrator | 2026-02-18 04:32:15.275147 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-18 04:32:15.275158 | orchestrator | Wednesday 18 February 2026 04:31:55 +0000 (0:00:00.070) 0:01:07.984 **** 2026-02-18 04:32:15.275169 | orchestrator | 2026-02-18 04:32:15.275179 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-02-18 04:32:15.275190 | orchestrator | Wednesday 18 February 2026 04:31:55 +0000 (0:00:00.269) 0:01:08.253 **** 2026-02-18 04:32:15.275201 | orchestrator | 2026-02-18 04:32:15.275212 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-18 04:32:15.275222 | orchestrator | Wednesday 18 February 2026 04:31:55 +0000 (0:00:00.075) 0:01:08.329 **** 2026-02-18 04:32:15.275232 | orchestrator | 2026-02-18 04:32:15.275243 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-18 04:32:15.275253 | orchestrator | Wednesday 18 February 2026 04:31:55 +0000 (0:00:00.068) 0:01:08.397 **** 2026-02-18 04:32:15.275264 | orchestrator | 2026-02-18 04:32:15.275274 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-02-18 04:32:15.275285 | orchestrator | Wednesday 18 February 2026 04:31:55 +0000 (0:00:00.071) 0:01:08.468 **** 2026-02-18 04:32:15.275295 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:32:15.275306 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:32:15.275316 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:32:15.275327 | orchestrator | 2026-02-18 04:32:15.275337 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-02-18 04:32:15.275348 | orchestrator | Wednesday 18 February 2026 04:32:05 +0000 (0:00:10.384) 0:01:18.852 **** 2026-02-18 04:32:15.275358 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:32:15.275378 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:32:22.003218 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:32:22.003352 | orchestrator | 2026-02-18 04:32:22.003380 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-02-18 04:32:22.003401 | orchestrator | Wednesday 18 February 2026 04:32:15 +0000 
(0:00:09.293) 0:01:28.146 **** 2026-02-18 04:32:22.003423 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:32:22.003445 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:32:22.003466 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:32:22.003485 | orchestrator | 2026-02-18 04:32:22.003503 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:32:22.003515 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-18 04:32:22.003528 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-18 04:32:22.003539 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-18 04:32:22.003550 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-18 04:32:22.003561 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-18 04:32:22.003571 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-18 04:32:22.003583 | orchestrator | 2026-02-18 04:32:22.003594 | orchestrator | 2026-02-18 04:32:22.003605 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:32:22.003616 | orchestrator | Wednesday 18 February 2026 04:32:21 +0000 (0:00:06.317) 0:01:34.464 **** 2026-02-18 04:32:22.003627 | orchestrator | =============================================================================== 2026-02-18 04:32:22.003638 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.38s 2026-02-18 04:32:22.003672 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.29s 2026-02-18 04:32:22.003684 | orchestrator | ceilometer : Restart 
ceilometer-compute container ----------------------- 6.32s 2026-02-18 04:32:22.003694 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 5.00s 2026-02-18 04:32:22.003705 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.63s 2026-02-18 04:32:22.003716 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.09s 2026-02-18 04:32:22.003727 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.84s 2026-02-18 04:32:22.003740 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.53s 2026-02-18 04:32:22.003753 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.36s 2026-02-18 04:32:22.003766 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.68s 2026-02-18 04:32:22.003779 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.53s 2026-02-18 04:32:22.003791 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.28s 2026-02-18 04:32:22.003804 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.86s 2026-02-18 04:32:22.003817 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.59s 2026-02-18 04:32:22.003830 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.58s 2026-02-18 04:32:22.003843 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.57s 2026-02-18 04:32:22.003856 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.56s 2026-02-18 04:32:22.003869 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.48s 2026-02-18 04:32:22.003907 | orchestrator | ceilometer : Ensuring config 
directories exist -------------------------- 1.46s 2026-02-18 04:32:22.003921 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 1.42s 2026-02-18 04:32:24.266332 | orchestrator | 2026-02-18 04:32:24 | INFO  | Task 638d162b-ee20-4c85-99db-0dc2178bc7f9 (aodh) was prepared for execution. 2026-02-18 04:32:24.266432 | orchestrator | 2026-02-18 04:32:24 | INFO  | It takes a moment until task 638d162b-ee20-4c85-99db-0dc2178bc7f9 (aodh) has been started and output is visible here. 2026-02-18 04:32:57.086739 | orchestrator | 2026-02-18 04:32:57.086851 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 04:32:57.086866 | orchestrator | 2026-02-18 04:32:57.086877 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 04:32:57.086887 | orchestrator | Wednesday 18 February 2026 04:32:28 +0000 (0:00:00.258) 0:00:00.258 **** 2026-02-18 04:32:57.086897 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:32:57.086908 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:32:57.086918 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:32:57.086975 | orchestrator | 2026-02-18 04:32:57.086985 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 04:32:57.086995 | orchestrator | Wednesday 18 February 2026 04:32:28 +0000 (0:00:00.302) 0:00:00.560 **** 2026-02-18 04:32:57.087005 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-02-18 04:32:57.087029 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-02-18 04:32:57.087039 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-02-18 04:32:57.087049 | orchestrator | 2026-02-18 04:32:57.087059 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-02-18 04:32:57.087069 | orchestrator | 2026-02-18 04:32:57.087078 | orchestrator | TASK [aodh : 
include_tasks] **************************************************** 2026-02-18 04:32:57.087088 | orchestrator | Wednesday 18 February 2026 04:32:29 +0000 (0:00:00.419) 0:00:00.980 **** 2026-02-18 04:32:57.087098 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:32:57.087108 | orchestrator | 2026-02-18 04:32:57.087118 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-02-18 04:32:57.087148 | orchestrator | Wednesday 18 February 2026 04:32:29 +0000 (0:00:00.546) 0:00:01.527 **** 2026-02-18 04:32:57.087159 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-02-18 04:32:57.087169 | orchestrator | 2026-02-18 04:32:57.087179 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-02-18 04:32:57.087189 | orchestrator | Wednesday 18 February 2026 04:32:33 +0000 (0:00:03.643) 0:00:05.171 **** 2026-02-18 04:32:57.087199 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-02-18 04:32:57.087208 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-02-18 04:32:57.087218 | orchestrator | 2026-02-18 04:32:57.087227 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-02-18 04:32:57.087237 | orchestrator | Wednesday 18 February 2026 04:32:40 +0000 (0:00:06.932) 0:00:12.103 **** 2026-02-18 04:32:57.087246 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-18 04:32:57.087257 | orchestrator | 2026-02-18 04:32:57.087266 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-02-18 04:32:57.087276 | orchestrator | Wednesday 18 February 2026 04:32:43 +0000 (0:00:03.437) 0:00:15.540 **** 2026-02-18 04:32:57.087287 | orchestrator | [WARNING]: Module did not set 
no_log for update_password 2026-02-18 04:32:57.087298 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-02-18 04:32:57.087310 | orchestrator | 2026-02-18 04:32:57.087321 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-02-18 04:32:57.087332 | orchestrator | Wednesday 18 February 2026 04:32:47 +0000 (0:00:03.891) 0:00:19.432 **** 2026-02-18 04:32:57.087343 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-18 04:32:57.087354 | orchestrator | 2026-02-18 04:32:57.087365 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-18 04:32:57.087377 | orchestrator | Wednesday 18 February 2026 04:32:51 +0000 (0:00:03.430) 0:00:22.862 **** 2026-02-18 04:32:57.087388 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-18 04:32:57.087399 | orchestrator | 2026-02-18 04:32:57.087410 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-18 04:32:57.087421 | orchestrator | Wednesday 18 February 2026 04:32:55 +0000 (0:00:03.953) 0:00:26.815 **** 2026-02-18 04:32:57.087436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:32:57.087470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:32:57.087495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:32:57.087508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:32:57.087522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:32:57.087534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:32:57.087546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:57.087566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:58.496282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:58.496415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:58.496443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:58.496465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:32:58.496483 | orchestrator | 2026-02-18 04:32:58.496504 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-02-18 04:32:58.496523 | orchestrator | Wednesday 18 February 2026 04:32:57 +0000 (0:00:02.002) 0:00:28.817 **** 2026-02-18 04:32:58.496542 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:32:58.496563 | orchestrator | 2026-02-18 
04:32:58.496582 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-02-18 04:32:58.496602 | orchestrator | Wednesday 18 February 2026 04:32:57 +0000 (0:00:00.144) 0:00:28.962 **** 2026-02-18 04:32:58.496620 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:32:58.496638 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:32:58.496656 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:32:58.496675 | orchestrator | 2026-02-18 04:32:58.496693 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-02-18 04:32:58.496712 | orchestrator | Wednesday 18 February 2026 04:32:57 +0000 (0:00:00.548) 0:00:29.511 **** 2026-02-18 04:32:58.496729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 04:32:58.496789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 04:32:58.496813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:32:58.496828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 04:32:58.496842 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:32:58.496855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 04:32:58.496869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 04:32:58.496881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:32:58.496911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 04:33:03.518898 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:33:03.519053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 04:33:03.519071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-18 04:33:03.519082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:33:03.519091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 04:33:03.519099 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:33:03.519107 | orchestrator | 2026-02-18 04:33:03.519116 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-18 04:33:03.519126 | orchestrator | Wednesday 18 February 2026 04:32:58 +0000 (0:00:00.722) 0:00:30.234 **** 2026-02-18 04:33:03.519151 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:33:03.519160 | orchestrator | 2026-02-18 04:33:03.519168 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-18 04:33:03.519176 | orchestrator | 
Wednesday 18 February 2026 04:32:59 +0000 (0:00:00.719) 0:00:30.954 **** 2026-02-18 04:33:03.519185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:03.519213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:03.519222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': 
{'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:03.519231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:03.519239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:03.519253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:03.519262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:03.519291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:04.147022 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:04.147108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:04.147120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:04.147130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:04.147161 | orchestrator | 2026-02-18 04:33:04.147172 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-18 04:33:04.147183 | orchestrator | Wednesday 18 February 2026 04:33:03 +0000 (0:00:04.292) 0:00:35.246 **** 2026-02-18 04:33:04.147198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 04:33:04.147228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 04:33:04.147254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:33:04.147264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 04:33:04.147273 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:33:04.147284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 04:33:04.147299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 04:33:04.147335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:33:04.147344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 04:33:04.147353 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:33:04.147374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 04:33:05.135873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-18 04:33:05.136019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:33:05.136059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 04:33:05.136073 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:33:05.136087 | orchestrator | 2026-02-18 04:33:05.136099 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-18 04:33:05.136112 | orchestrator | Wednesday 18 February 2026 04:33:04 +0000 (0:00:00.638) 0:00:35.885 **** 2026-02-18 04:33:05.136124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 04:33:05.136151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 04:33:05.136164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:33:05.136195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 04:33:05.136207 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:33:05.136226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 04:33:05.136238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 04:33:05.136250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:33:05.136261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 04:33:05.136273 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:33:05.136297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-18 04:33:09.232066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 04:33:09.232228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 04:33:09.232256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 04:33:09.232276 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:33:09.232298 | orchestrator | 2026-02-18 04:33:09.232317 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-02-18 04:33:09.232338 | orchestrator | Wednesday 18 February 2026 04:33:05 +0000 (0:00:00.986) 0:00:36.872 **** 2026-02-18 04:33:09.232351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:09.232386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:09.232420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:09.232444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:09.232455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:09.232467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:09.232478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:09.232495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:09.232508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:09.232537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:17.887453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:17.887566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:17.887584 | orchestrator | 2026-02-18 04:33:17.887598 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-18 04:33:17.887611 | orchestrator | Wednesday 18 February 2026 04:33:09 +0000 (0:00:04.094) 0:00:40.966 **** 2026-02-18 04:33:17.887624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:17.887653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:17.887665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:17.887719 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:17.887733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:17.887744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:17.887756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:17.887772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:17.887784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:17.887804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:17.887825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:22.969520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:22.969654 | orchestrator | 2026-02-18 04:33:22.969675 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-02-18 04:33:22.969688 | orchestrator | Wednesday 18 February 2026 04:33:17 +0000 (0:00:08.650) 0:00:49.616 **** 2026-02-18 04:33:22.969699 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:33:22.969711 | orchestrator | 
changed: [testbed-node-1] 2026-02-18 04:33:22.969722 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:33:22.969732 | orchestrator | 2026-02-18 04:33:22.969744 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-02-18 04:33:22.969754 | orchestrator | Wednesday 18 February 2026 04:33:19 +0000 (0:00:01.765) 0:00:51.382 **** 2026-02-18 04:33:22.969767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:22.969796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:22.969829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-18 04:33:22.969859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:22.969872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:22.969883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-18 04:33:22.969895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:22.969912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:22.969933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:22.969944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:33:22.970082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:34:13.270352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-18 04:34:13.270505 | orchestrator | 2026-02-18 04:34:13.270521 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-18 04:34:13.270544 | orchestrator | Wednesday 18 February 2026 04:33:22 +0000 (0:00:03.318) 0:00:54.700 **** 2026-02-18 04:34:13.270553 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:34:13.270563 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:34:13.270572 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:34:13.270580 | orchestrator | 2026-02-18 04:34:13.270589 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-02-18 04:34:13.270598 | orchestrator | Wednesday 18 February 2026 04:33:23 +0000 (0:00:00.340) 0:00:55.041 **** 2026-02-18 04:34:13.270606 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:34:13.270615 | orchestrator | 2026-02-18 04:34:13.270624 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-02-18 04:34:13.270633 | orchestrator | Wednesday 18 February 2026 04:33:25 +0000 (0:00:02.237) 0:00:57.278 **** 2026-02-18 04:34:13.270641 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:34:13.270684 | orchestrator | 2026-02-18 
04:34:13.270700 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-02-18 04:34:13.270715 | orchestrator | Wednesday 18 February 2026 04:33:27 +0000 (0:00:02.282) 0:00:59.560 **** 2026-02-18 04:34:13.270731 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:34:13.270747 | orchestrator | 2026-02-18 04:34:13.270761 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-18 04:34:13.270776 | orchestrator | Wednesday 18 February 2026 04:33:41 +0000 (0:00:13.592) 0:01:13.153 **** 2026-02-18 04:34:13.270787 | orchestrator | 2026-02-18 04:34:13.270795 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-18 04:34:13.270804 | orchestrator | Wednesday 18 February 2026 04:33:41 +0000 (0:00:00.079) 0:01:13.233 **** 2026-02-18 04:34:13.270812 | orchestrator | 2026-02-18 04:34:13.270821 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-18 04:34:13.270829 | orchestrator | Wednesday 18 February 2026 04:33:41 +0000 (0:00:00.084) 0:01:13.317 **** 2026-02-18 04:34:13.270838 | orchestrator | 2026-02-18 04:34:13.270846 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-02-18 04:34:13.270855 | orchestrator | Wednesday 18 February 2026 04:33:41 +0000 (0:00:00.250) 0:01:13.568 **** 2026-02-18 04:34:13.270864 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:34:13.270886 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:34:13.270895 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:34:13.270903 | orchestrator | 2026-02-18 04:34:13.270911 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-02-18 04:34:13.270920 | orchestrator | Wednesday 18 February 2026 04:33:52 +0000 (0:00:10.506) 0:01:24.075 **** 2026-02-18 04:34:13.270928 | orchestrator | changed: 
[testbed-node-0]
2026-02-18 04:34:13.270937 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:34:13.270945 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:34:13.270953 | orchestrator |
2026-02-18 04:34:13.270962 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-02-18 04:34:13.270970 | orchestrator | Wednesday 18 February 2026 04:33:57 +0000 (0:00:05.075) 0:01:29.150 ****
2026-02-18 04:34:13.270979 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:34:13.270987 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:34:13.270996 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:34:13.271060 | orchestrator |
2026-02-18 04:34:13.271080 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-02-18 04:34:13.271097 | orchestrator | Wednesday 18 February 2026 04:34:07 +0000 (0:00:10.227) 0:01:39.378 ****
2026-02-18 04:34:13.271111 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:34:13.271126 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:34:13.271136 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:34:13.271145 | orchestrator |
2026-02-18 04:34:13.271154 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 04:34:13.271164 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 04:34:13.271174 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-18 04:34:13.271183 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-18 04:34:13.271191 | orchestrator |
2026-02-18 04:34:13.271200 | orchestrator |
2026-02-18 04:34:13.271209 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 04:34:13.271217 | orchestrator | Wednesday 18 February 2026 04:34:12 +0000 (0:00:05.314) 0:01:44.692 ****
2026-02-18 04:34:13.271226 | orchestrator | ===============================================================================
2026-02-18 04:34:13.271234 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.59s
2026-02-18 04:34:13.271243 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.51s
2026-02-18 04:34:13.271278 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.23s
2026-02-18 04:34:13.271288 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.65s
2026-02-18 04:34:13.271297 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.93s
2026-02-18 04:34:13.271306 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 5.31s
2026-02-18 04:34:13.271314 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 5.08s
2026-02-18 04:34:13.271323 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.29s
2026-02-18 04:34:13.271331 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.09s
2026-02-18 04:34:13.271340 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.95s
2026-02-18 04:34:13.271348 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.89s
2026-02-18 04:34:13.271357 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.64s
2026-02-18 04:34:13.271368 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.44s
2026-02-18 04:34:13.271383 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.43s
2026-02-18 04:34:13.271397 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.32s
2026-02-18 04:34:13.271411 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.28s
2026-02-18 04:34:13.271426 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.24s
2026-02-18 04:34:13.271441 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.00s
2026-02-18 04:34:13.271457 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.77s
2026-02-18 04:34:13.271471 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 0.99s
2026-02-18 04:34:15.490481 | orchestrator | 2026-02-18 04:34:15 | INFO  | Task 0169f977-30af-4ce5-9d0d-9872a5749cf1 (kolla-ceph-rgw) was prepared for execution.
2026-02-18 04:34:15.490595 | orchestrator | 2026-02-18 04:34:15 | INFO  | It takes a moment until task 0169f977-30af-4ce5-9d0d-9872a5749cf1 (kolla-ceph-rgw) has been started and output is visible here.
2026-02-18 04:34:50.271388 | orchestrator |
2026-02-18 04:34:50.271505 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 04:34:50.271522 | orchestrator |
2026-02-18 04:34:50.271534 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 04:34:50.271546 | orchestrator | Wednesday 18 February 2026 04:34:19 +0000 (0:00:00.275) 0:00:00.275 ****
2026-02-18 04:34:50.271557 | orchestrator | ok: [testbed-manager]
2026-02-18 04:34:50.271568 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:34:50.271579 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:34:50.271590 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:34:50.271600 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:34:50.271611 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:34:50.271636 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:34:50.271647 | orchestrator |
2026-02-18 04:34:50.271658 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 04:34:50.271669 | orchestrator | Wednesday 18 February 2026 04:34:20 +0000 (0:00:00.844) 0:00:01.119 ****
2026-02-18 04:34:50.271680 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-02-18 04:34:50.271691 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-02-18 04:34:50.271702 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-02-18 04:34:50.271712 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-02-18 04:34:50.271723 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-02-18 04:34:50.271733 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-02-18 04:34:50.271744 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-02-18 04:34:50.271776 | orchestrator |
2026-02-18 04:34:50.271788 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-18 04:34:50.271798 | orchestrator |
2026-02-18 04:34:50.271809 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-02-18 04:34:50.271820 | orchestrator | Wednesday 18 February 2026 04:34:21 +0000 (0:00:00.745) 0:00:01.865 ****
2026-02-18 04:34:50.271831 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 04:34:50.271842 | orchestrator |
2026-02-18 04:34:50.271853 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-02-18 04:34:50.271864 | orchestrator | Wednesday 18 February 2026 04:34:22 +0000 (0:00:01.535) 0:00:03.401 ****
2026-02-18 04:34:50.271875 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-02-18 04:34:50.271886 | orchestrator |
2026-02-18 04:34:50.271896 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-02-18 04:34:50.271907 | orchestrator | Wednesday 18 February 2026 04:34:26 +0000 (0:00:03.673) 0:00:07.074 ****
2026-02-18 04:34:50.271918 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-02-18 04:34:50.271932 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-02-18 04:34:50.271945 | orchestrator |
2026-02-18 04:34:50.271958 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-02-18 04:34:50.271969 | orchestrator | Wednesday 18 February 2026 04:34:32 +0000 (0:00:06.047) 0:00:13.122 ****
2026-02-18 04:34:50.271982 | orchestrator | ok: [testbed-manager] => (item=service)
2026-02-18 04:34:50.271994 | orchestrator |
2026-02-18 04:34:50.272023 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-02-18 04:34:50.272064 | orchestrator | Wednesday 18 February 2026 04:34:35 +0000 (0:00:03.115) 0:00:16.237 ****
2026-02-18 04:34:50.272077 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-18 04:34:50.272090 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-02-18 04:34:50.272103 | orchestrator |
2026-02-18 04:34:50.272115 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-02-18 04:34:50.272125 | orchestrator | Wednesday 18 February 2026 04:34:39 +0000 (0:00:03.700) 0:00:19.938 ****
2026-02-18 04:34:50.272135 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-02-18 04:34:50.272146 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-02-18 04:34:50.272156 | orchestrator |
2026-02-18 04:34:50.272167 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-02-18 04:34:50.272177 | orchestrator | Wednesday 18 February 2026 04:34:45 +0000 (0:00:05.892) 0:00:25.831 ****
2026-02-18 04:34:50.272188 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-02-18 04:34:50.272198 | orchestrator |
2026-02-18 04:34:50.272209 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 04:34:50.272220 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 04:34:50.272231 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 04:34:50.272242 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 04:34:50.272253 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 04:34:50.272263 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 04:34:50.272300 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 04:34:50.272313 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 04:34:50.272323 | orchestrator |
2026-02-18 04:34:50.272334 | orchestrator |
2026-02-18 04:34:50.272345 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 04:34:50.272356 | orchestrator | Wednesday 18 February 2026 04:34:49 +0000 (0:00:04.693) 0:00:30.524 ****
2026-02-18 04:34:50.272366 | orchestrator | ===============================================================================
2026-02-18 04:34:50.272383 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.05s
2026-02-18 04:34:50.272394 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.89s
2026-02-18 04:34:50.272404 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.69s
2026-02-18 04:34:50.272415 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.70s
2026-02-18 04:34:50.272425 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.67s
2026-02-18 04:34:50.272436 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.12s
2026-02-18 04:34:50.272447 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.54s
2026-02-18 04:34:50.272457 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s
2026-02-18 04:34:50.272468 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s
2026-02-18 04:34:52.597348 | orchestrator | 2026-02-18 04:34:52 | INFO  | Task 45418c79-805e-403e-8c2f-84fa5759516b (gnocchi) was prepared for execution.
2026-02-18 04:34:52.597454 | orchestrator | 2026-02-18 04:34:52 | INFO  | It takes a moment until task 45418c79-805e-403e-8c2f-84fa5759516b (gnocchi) has been started and output is visible here.
2026-02-18 04:34:57.645695 | orchestrator |
2026-02-18 04:34:57.645788 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 04:34:57.645800 | orchestrator |
2026-02-18 04:34:57.645808 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 04:34:57.645816 | orchestrator | Wednesday 18 February 2026 04:34:56 +0000 (0:00:00.266) 0:00:00.266 ****
2026-02-18 04:34:57.645824 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:34:57.645832 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:34:57.645839 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:34:57.645846 | orchestrator |
2026-02-18 04:34:57.645854 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 04:34:57.645861 | orchestrator | Wednesday 18 February 2026 04:34:56 +0000 (0:00:00.332) 0:00:00.598 ****
2026-02-18 04:34:57.645868 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-02-18 04:34:57.645876 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-02-18 04:34:57.645883 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-02-18 04:34:57.645891 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-02-18 04:34:57.645898 | orchestrator |
2026-02-18 04:34:57.645905 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-02-18 04:34:57.645912 | orchestrator | skipping: no hosts matched
2026-02-18 04:34:57.645920 | orchestrator |
2026-02-18 04:34:57.645927 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 04:34:57.645934 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 04:34:57.645943 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 04:34:57.645950 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-18 04:34:57.645979 | orchestrator |
2026-02-18 04:34:57.645987 | orchestrator |
2026-02-18 04:34:57.645994 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 04:34:57.646001 | orchestrator | Wednesday 18 February 2026 04:34:57 +0000 (0:00:00.368) 0:00:00.967 ****
2026-02-18 04:34:57.646008 | orchestrator | ===============================================================================
2026-02-18 04:34:57.646077 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s
2026-02-18 04:34:57.646088 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-02-18 04:34:59.921489 | orchestrator | 2026-02-18 04:34:59 | INFO  | Task da06a400-d212-45ee-9819-f6bbb34bbb82 (manila) was prepared for execution.
2026-02-18 04:34:59.921587 | orchestrator | 2026-02-18 04:34:59 | INFO  | It takes a moment until task da06a400-d212-45ee-9819-f6bbb34bbb82 (manila) has been started and output is visible here.
2026-02-18 04:35:42.961592 | orchestrator |
2026-02-18 04:35:42.961713 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 04:35:42.961730 | orchestrator |
2026-02-18 04:35:42.961744 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 04:35:42.961756 | orchestrator | Wednesday 18 February 2026 04:35:03 +0000 (0:00:00.253) 0:00:00.253 ****
2026-02-18 04:35:42.961767 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:35:42.961779 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:35:42.961805 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:35:42.961817 | orchestrator |
2026-02-18 04:35:42.961828 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 04:35:42.961839 | orchestrator | Wednesday 18 February 2026 04:35:04 +0000 (0:00:00.307) 0:00:00.560 ****
2026-02-18 04:35:42.961850 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-02-18 04:35:42.961862 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-02-18 04:35:42.961872 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-02-18 04:35:42.961884 | orchestrator |
2026-02-18 04:35:42.961894 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-02-18 04:35:42.961905 | orchestrator |
2026-02-18 04:35:42.961916 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-18 04:35:42.961927 | orchestrator | Wednesday 18 February 2026 04:35:04 +0000 (0:00:00.402) 0:00:00.962 ****
2026-02-18 04:35:42.961953 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 04:35:42.961966 | orchestrator |
2026-02-18 04:35:42.961977 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-18 04:35:42.961988 | orchestrator | Wednesday 18 February 2026 04:35:05 +0000 (0:00:00.533) 0:00:01.496 ****
2026-02-18 04:35:42.961999 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:35:42.962010 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:35:42.962073 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:35:42.962085 | orchestrator |
2026-02-18 04:35:42.962096 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-02-18 04:35:42.962140 | orchestrator | Wednesday 18 February 2026 04:35:05 +0000 (0:00:00.457) 0:00:01.954 ****
2026-02-18 04:35:42.962160 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-02-18 04:35:42.962181 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-02-18 04:35:42.962201 | orchestrator |
2026-02-18 04:35:42.962217 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-02-18 04:35:42.962230 | orchestrator | Wednesday 18 February 2026 04:35:12 +0000 (0:00:06.771) 0:00:08.725 ****
2026-02-18 04:35:42.962243 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-02-18 04:35:42.962256 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-02-18 04:35:42.962290 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-02-18 04:35:42.962301 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-02-18 04:35:42.962312 | orchestrator |
2026-02-18 04:35:42.962323 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-02-18 04:35:42.962333 | orchestrator | Wednesday 18 February 2026 04:35:25 +0000 (0:00:13.507) 0:00:22.233 ****
2026-02-18 04:35:42.962344 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-18 04:35:42.962355 | orchestrator |
2026-02-18 04:35:42.962365 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-02-18 04:35:42.962376 | orchestrator | Wednesday 18 February 2026 04:35:29 +0000 (0:00:03.356) 0:00:25.589 ****
2026-02-18 04:35:42.962387 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-18 04:35:42.962398 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-02-18 04:35:42.962408 | orchestrator |
2026-02-18 04:35:42.962419 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-02-18 04:35:42.962430 | orchestrator | Wednesday 18 February 2026 04:35:33 +0000 (0:00:04.181) 0:00:29.771 ****
2026-02-18 04:35:42.962440 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-18 04:35:42.962451 | orchestrator |
2026-02-18 04:35:42.962462 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-02-18 04:35:42.962472 | orchestrator | Wednesday 18 February 2026 04:35:36 +0000 (0:00:03.294) 0:00:33.065 ****
2026-02-18 04:35:42.962483 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-02-18 04:35:42.962494 | orchestrator |
2026-02-18 04:35:42.962505 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-02-18 04:35:42.962516 | orchestrator | Wednesday 18 February 2026 04:35:40 +0000 (0:00:04.036) 0:00:37.101 ****
2026-02-18 04:35:42.962550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-18 04:35:42.962566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-18 04:35:42.962585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-18 04:35:42.962607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-18 04:35:42.962670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-18 04:35:42.962682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-18 04:35:42.962702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-18 04:35:53.255748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-18 04:35:53.255855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-18 04:35:53.255878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-18 04:35:53.255884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-18 04:35:53.255888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-18 04:35:53.255892 | orchestrator |
2026-02-18 04:35:53.255897 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-18 04:35:53.255903 | orchestrator | Wednesday 18 February 2026 04:35:43 +0000 (0:00:02.217) 0:00:39.319 ****
2026-02-18 04:35:53.255907 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 04:35:53.255911 | orchestrator |
2026-02-18 04:35:53.255915 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-02-18 04:35:53.255918 | orchestrator | Wednesday 18 February 2026 04:35:43 +0000 (0:00:00.569) 0:00:39.888 ****
2026-02-18 04:35:53.255922 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:35:53.255927 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:35:53.255931 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:35:53.255934 | orchestrator |
2026-02-18 04:35:53.255939 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-02-18 04:35:53.255945 | orchestrator | Wednesday 18 February 2026 04:35:44 +0000 (0:00:00.925) 0:00:40.814 ****
2026-02-18 04:35:53.255952 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-18 04:35:53.255972 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-18 04:35:53.255980 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-18 04:35:53.255988 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-18 04:35:53.255992 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-18 04:35:53.256000 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-18 04:35:53.256004 | orchestrator |
2026-02-18 04:35:53.256007 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-02-18 04:35:53.256011 | orchestrator | Wednesday 18 February 2026 04:35:46 +0000 (0:00:01.751) 0:00:42.565 ****
2026-02-18 04:35:53.256015 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-18 04:35:53.256019 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-18 04:35:53.256022 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-18 04:35:53.256026 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-18 04:35:53.256030 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-18 04:35:53.256033 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-18 04:35:53.256037 | orchestrator |
2026-02-18 04:35:53.256041 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-02-18 04:35:53.256045 | orchestrator | Wednesday 18 February 2026 04:35:47 +0000 (0:00:01.219) 0:00:43.785 ****
2026-02-18 04:35:53.256049 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-02-18 04:35:53.256060 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-02-18 04:35:53.256065 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-02-18 04:35:53.256068 | orchestrator |
2026-02-18 04:35:53.256072 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-02-18 04:35:53.256076 | orchestrator | Wednesday 18 February 2026 04:35:48 +0000 (0:00:00.662) 0:00:44.447 ****
2026-02-18 04:35:53.256080 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:35:53.256083 | orchestrator |
2026-02-18 04:35:53.256087 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-02-18 04:35:53.256091 | orchestrator | Wednesday 18 February 2026 04:35:48 +0000 (0:00:00.129) 0:00:44.576 ****
2026-02-18 04:35:53.256095 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:35:53.256098 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:35:53.256102 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:35:53.256106 | orchestrator |
2026-02-18 04:35:53.256109 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-18 04:35:53.256134 | orchestrator | Wednesday 18 February 2026 04:35:48 +0000 (0:00:00.495) 0:00:45.071 ****
2026-02-18 04:35:53.256141 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 04:35:53.256147 | orchestrator |
2026-02-18 04:35:53.256153 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-02-18 04:35:53.256158 | orchestrator | Wednesday 18 February 2026 04:35:49 +0000 (0:00:00.582) 0:00:45.654 ****
2026-02-18 04:35:53.256177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-18 04:35:54.087293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 04:35:54.087393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 04:35:54.087401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:35:54.087408 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:35:54.087411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:35:54.087444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-18 04:35:54.087454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-18 04:35:54.087458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-18 04:35:54.087462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:35:54.087466 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:35:54.087471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:35:54.087485 | orchestrator | 2026-02-18 04:35:54.087490 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-02-18 04:35:54.087494 | orchestrator | Wednesday 18 February 2026 04:35:53 +0000 (0:00:03.975) 0:00:49.630 **** 2026-02-18 04:35:54.087508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 04:35:54.729650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:35:54.729739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:35:54.729753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 04:35:54.729764 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:35:54.729775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 04:35:54.729808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:35:54.729818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:35:54.729847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 04:35:54.729857 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:35:54.729867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 04:35:54.729876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:35:54.729886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:35:54.729901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 04:35:54.729911 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:35:54.729920 | orchestrator | 2026-02-18 04:35:54.729930 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-02-18 04:35:54.729940 | orchestrator | Wednesday 18 February 2026 04:35:54 +0000 (0:00:00.840) 0:00:50.470 **** 2026-02-18 04:35:54.730002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 04:35:59.081944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:35:59.082110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:35:59.082205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 04:35:59.082252 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:35:59.082277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 04:35:59.082300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:35:59.082322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:35:59.082378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 04:35:59.082402 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:35:59.082425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 04:35:59.082461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:35:59.082484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:35:59.082508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 04:35:59.082532 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:35:59.082556 | orchestrator | 2026-02-18 04:35:59.082580 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-02-18 04:35:59.082606 | orchestrator | Wednesday 18 
February 2026 04:35:55 +0000 (0:00:00.850) 0:00:51.321 **** 2026-02-18 04:35:59.082651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 04:36:05.939829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 04:36:05.939958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 04:36:05.939976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:05.939991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-02-18 04:36:05.940002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:05.940043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:05.940058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:05.940078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:05.940090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:05.940102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:05.940113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:05.940125 | orchestrator | 2026-02-18 04:36:05.940196 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-02-18 04:36:05.940209 | orchestrator | Wednesday 18 February 2026 04:35:59 +0000 (0:00:04.445) 0:00:55.766 **** 2026-02-18 04:36:05.940235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 04:36:10.123399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 04:36:10.123511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 04:36:10.123519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:10.123526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:36:10.123539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:10.123552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:36:10.123561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:10.123565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:36:10.123570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:10.123574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:10.123578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:10.123582 | orchestrator | 2026-02-18 04:36:10.123587 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-02-18 04:36:10.123599 | orchestrator | Wednesday 18 February 2026 04:36:06 +0000 (0:00:06.547) 0:01:02.314 **** 
2026-02-18 04:36:10.123604 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-02-18 04:36:10.123611 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-02-18 04:36:10.123615 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-02-18 04:36:10.123618 | orchestrator | 2026-02-18 04:36:10.123622 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-02-18 04:36:10.123629 | orchestrator | Wednesday 18 February 2026 04:36:09 +0000 (0:00:03.521) 0:01:05.835 **** 2026-02-18 04:36:10.123638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 04:36:13.257020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:36:13.257109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:36:13.257121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 04:36:13.257129 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:36:13.257185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 04:36:13.257209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 04:36:13.257232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:36:13.257253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 04:36:13.257260 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:36:13.257267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-18 04:36:13.257273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-02-18 04:36:13.257280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 04:36:13.257295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 04:36:13.257301 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:36:13.257308 | orchestrator | 2026-02-18 04:36:13.257315 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-02-18 04:36:13.257323 | orchestrator | Wednesday 18 February 2026 04:36:10 +0000 (0:00:00.657) 0:01:06.492 **** 2026-02-18 04:36:13.257335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 04:36:52.641476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 04:36:52.641594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-18 04:36:52.641611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:52.641664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:52.641677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:52.641705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:52.641718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:52.641730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:52.641741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:52.641768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:52.641779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-18 04:36:52.641792 | orchestrator | 2026-02-18 04:36:52.641805 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-02-18 04:36:52.641817 | orchestrator | Wednesday 18 February 2026 04:36:13 +0000 (0:00:03.132) 0:01:09.625 **** 2026-02-18 04:36:52.641829 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:36:52.641840 | orchestrator | 2026-02-18 04:36:52.641851 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-02-18 04:36:52.641862 | orchestrator | Wednesday 18 February 2026 04:36:15 +0000 (0:00:02.109) 0:01:11.735 **** 2026-02-18 04:36:52.641873 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:36:52.641883 | orchestrator | 2026-02-18 04:36:52.641894 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-02-18 04:36:52.641905 | orchestrator | Wednesday 18 February 2026 04:36:17 +0000 (0:00:02.241) 0:01:13.976 **** 2026-02-18 04:36:52.641915 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:36:52.641926 | orchestrator | 2026-02-18 04:36:52.641937 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-18 04:36:52.641948 | orchestrator | Wednesday 18 February 2026 04:36:52 +0000 (0:00:34.709) 0:01:48.685 **** 2026-02-18 04:36:52.641958 | orchestrator | 2026-02-18 04:36:52.641976 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-18 04:37:37.278978 | orchestrator | Wednesday 18 February 2026 
04:36:52 +0000 (0:00:00.072) 0:01:48.757 **** 2026-02-18 04:37:37.279101 | orchestrator | 2026-02-18 04:37:37.279119 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-18 04:37:37.279131 | orchestrator | Wednesday 18 February 2026 04:36:52 +0000 (0:00:00.073) 0:01:48.830 **** 2026-02-18 04:37:37.279142 | orchestrator | 2026-02-18 04:37:37.279152 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-02-18 04:37:37.279164 | orchestrator | Wednesday 18 February 2026 04:36:52 +0000 (0:00:00.070) 0:01:48.901 **** 2026-02-18 04:37:37.279174 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:37:37.279186 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:37:37.279197 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:37:37.279208 | orchestrator | 2026-02-18 04:37:37.279218 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-02-18 04:37:37.279292 | orchestrator | Wednesday 18 February 2026 04:37:02 +0000 (0:00:10.151) 0:01:59.053 **** 2026-02-18 04:37:37.279304 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:37:37.279315 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:37:37.279326 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:37:37.279337 | orchestrator | 2026-02-18 04:37:37.279348 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-02-18 04:37:37.279386 | orchestrator | Wednesday 18 February 2026 04:37:08 +0000 (0:00:05.741) 0:02:04.794 **** 2026-02-18 04:37:37.279398 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:37:37.279409 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:37:37.279420 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:37:37.279431 | orchestrator | 2026-02-18 04:37:37.279441 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-02-18 
04:37:37.279452 | orchestrator | Wednesday 18 February 2026 04:37:18 +0000 (0:00:10.047) 0:02:14.841 **** 2026-02-18 04:37:37.279463 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:37:37.279474 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:37:37.279484 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:37:37.279495 | orchestrator | 2026-02-18 04:37:37.279506 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:37:37.279519 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 04:37:37.279532 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-18 04:37:37.279544 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-18 04:37:37.279557 | orchestrator | 2026-02-18 04:37:37.279569 | orchestrator | 2026-02-18 04:37:37.279581 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:37:37.279594 | orchestrator | Wednesday 18 February 2026 04:37:36 +0000 (0:00:18.298) 0:02:33.140 **** 2026-02-18 04:37:37.279607 | orchestrator | =============================================================================== 2026-02-18 04:37:37.279619 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 34.71s 2026-02-18 04:37:37.279631 | orchestrator | manila : Restart manila-share container -------------------------------- 18.30s 2026-02-18 04:37:37.279643 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 13.51s 2026-02-18 04:37:37.279655 | orchestrator | manila : Restart manila-api container ---------------------------------- 10.15s 2026-02-18 04:37:37.279667 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.05s 2026-02-18 04:37:37.279698 | 
orchestrator | service-ks-register : manila | Creating services ------------------------ 6.77s 2026-02-18 04:37:37.279720 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.55s 2026-02-18 04:37:37.279739 | orchestrator | manila : Restart manila-data container ---------------------------------- 5.74s 2026-02-18 04:37:37.279759 | orchestrator | manila : Copying over config.json files for services -------------------- 4.45s 2026-02-18 04:37:37.279778 | orchestrator | service-ks-register : manila | Creating users --------------------------- 4.18s 2026-02-18 04:37:37.279798 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 4.04s 2026-02-18 04:37:37.279817 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 3.98s 2026-02-18 04:37:37.279838 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.52s 2026-02-18 04:37:37.279857 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.36s 2026-02-18 04:37:37.279878 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.29s 2026-02-18 04:37:37.279897 | orchestrator | manila : Check manila containers ---------------------------------------- 3.13s 2026-02-18 04:37:37.279916 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.24s 2026-02-18 04:37:37.279934 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.22s 2026-02-18 04:37:37.279952 | orchestrator | manila : Creating Manila database --------------------------------------- 2.11s 2026-02-18 04:37:37.279969 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.75s 2026-02-18 04:37:37.568393 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-02-18 04:37:49.674358 | orchestrator | 2026-02-18 04:37:49 
| INFO  | Task d370e725-01ee-4b8b-8801-d1c02ed8aa18 (netdata) was prepared for execution. 2026-02-18 04:37:49.674477 | orchestrator | 2026-02-18 04:37:49 | INFO  | It takes a moment until task d370e725-01ee-4b8b-8801-d1c02ed8aa18 (netdata) has been started and output is visible here. 2026-02-18 04:39:27.340785 | orchestrator | 2026-02-18 04:39:27.340877 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 04:39:27.340888 | orchestrator | 2026-02-18 04:39:27.340895 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 04:39:27.340902 | orchestrator | Wednesday 18 February 2026 04:37:53 +0000 (0:00:00.228) 0:00:00.228 **** 2026-02-18 04:39:27.340909 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-18 04:39:27.340916 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-18 04:39:27.340922 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-18 04:39:27.340928 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-18 04:39:27.340934 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-18 04:39:27.340941 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-18 04:39:27.340947 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-18 04:39:27.340953 | orchestrator | 2026-02-18 04:39:27.340959 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-18 04:39:27.340965 | orchestrator | 2026-02-18 04:39:27.340971 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-18 04:39:27.340977 | orchestrator | Wednesday 18 February 2026 04:37:54 +0000 (0:00:00.872) 0:00:01.101 **** 2026-02-18 04:39:27.340985 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 04:39:27.340993 | orchestrator | 2026-02-18 04:39:27.340999 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-18 04:39:27.341006 | orchestrator | Wednesday 18 February 2026 04:37:56 +0000 (0:00:01.291) 0:00:02.392 **** 2026-02-18 04:39:27.341012 | orchestrator | ok: [testbed-manager] 2026-02-18 04:39:27.341019 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:39:27.341026 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:39:27.341033 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:39:27.341039 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:39:27.341045 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:39:27.341051 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:39:27.341057 | orchestrator | 2026-02-18 04:39:27.341064 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-18 04:39:27.341070 | orchestrator | Wednesday 18 February 2026 04:37:57 +0000 (0:00:01.764) 0:00:04.157 **** 2026-02-18 04:39:27.341076 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:39:27.341082 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:39:27.341088 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:39:27.341095 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:39:27.341101 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:39:27.341107 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:39:27.341113 | orchestrator | ok: [testbed-manager] 2026-02-18 04:39:27.341119 | orchestrator | 2026-02-18 04:39:27.341126 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-18 04:39:27.341132 | orchestrator | Wednesday 18 February 2026 04:37:59 +0000 (0:00:01.931) 0:00:06.089 **** 
2026-02-18 04:39:27.341138 | orchestrator | changed: [testbed-manager] 2026-02-18 04:39:27.341145 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:39:27.341151 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:39:27.341157 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:39:27.341163 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:39:27.341187 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:39:27.341193 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:39:27.341199 | orchestrator | 2026-02-18 04:39:27.341206 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-18 04:39:27.341223 | orchestrator | Wednesday 18 February 2026 04:38:01 +0000 (0:00:01.533) 0:00:07.622 **** 2026-02-18 04:39:27.341229 | orchestrator | changed: [testbed-manager] 2026-02-18 04:39:27.341236 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:39:27.341242 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:39:27.341248 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:39:27.341254 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:39:27.341260 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:39:27.341266 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:39:27.341272 | orchestrator | 2026-02-18 04:39:27.341278 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-02-18 04:39:27.341285 | orchestrator | Wednesday 18 February 2026 04:38:21 +0000 (0:00:20.436) 0:00:28.059 **** 2026-02-18 04:39:27.341291 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:39:27.341297 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:39:27.341303 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:39:27.341309 | orchestrator | changed: [testbed-manager] 2026-02-18 04:39:27.341315 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:39:27.341321 | orchestrator | changed: [testbed-node-0] 2026-02-18 
04:39:27.341328 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:39:27.341334 | orchestrator | 2026-02-18 04:39:27.341340 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-18 04:39:27.341366 | orchestrator | Wednesday 18 February 2026 04:39:02 +0000 (0:00:40.805) 0:01:08.865 **** 2026-02-18 04:39:27.341374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 04:39:27.341383 | orchestrator | 2026-02-18 04:39:27.341390 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-18 04:39:27.341397 | orchestrator | Wednesday 18 February 2026 04:39:04 +0000 (0:00:01.525) 0:01:10.390 **** 2026-02-18 04:39:27.341405 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-18 04:39:27.341412 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-18 04:39:27.341419 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-18 04:39:27.341426 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-02-18 04:39:27.341445 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-18 04:39:27.341453 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-02-18 04:39:27.341460 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-02-18 04:39:27.341467 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-02-18 04:39:27.341474 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-02-18 04:39:27.341481 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-02-18 04:39:27.341487 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-02-18 04:39:27.341494 | orchestrator | changed: [testbed-node-3] => 
(item=stream.conf) 2026-02-18 04:39:27.341501 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-02-18 04:39:27.341508 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-02-18 04:39:27.341515 | orchestrator | 2026-02-18 04:39:27.341522 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-02-18 04:39:27.341530 | orchestrator | Wednesday 18 February 2026 04:39:07 +0000 (0:00:03.338) 0:01:13.728 **** 2026-02-18 04:39:27.341537 | orchestrator | ok: [testbed-manager] 2026-02-18 04:39:27.341544 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:39:27.341552 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:39:27.341559 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:39:27.341572 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:39:27.341579 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:39:27.341586 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:39:27.341592 | orchestrator | 2026-02-18 04:39:27.341598 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-02-18 04:39:27.341605 | orchestrator | Wednesday 18 February 2026 04:39:08 +0000 (0:00:01.255) 0:01:14.984 **** 2026-02-18 04:39:27.341611 | orchestrator | changed: [testbed-manager] 2026-02-18 04:39:27.341617 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:39:27.341623 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:39:27.341629 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:39:27.341635 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:39:27.341642 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:39:27.341648 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:39:27.341654 | orchestrator | 2026-02-18 04:39:27.341660 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-02-18 04:39:27.341667 | orchestrator | Wednesday 18 February 2026 04:39:09 +0000 
(0:00:01.258) 0:01:16.243 **** 2026-02-18 04:39:27.341673 | orchestrator | ok: [testbed-manager] 2026-02-18 04:39:27.341679 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:39:27.341685 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:39:27.341692 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:39:27.341698 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:39:27.341704 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:39:27.341710 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:39:27.341716 | orchestrator | 2026-02-18 04:39:27.341722 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-18 04:39:27.341729 | orchestrator | Wednesday 18 February 2026 04:39:11 +0000 (0:00:01.223) 0:01:17.467 **** 2026-02-18 04:39:27.341735 | orchestrator | ok: [testbed-manager] 2026-02-18 04:39:27.341741 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:39:27.341747 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:39:27.341753 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:39:27.341759 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:39:27.341765 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:39:27.341771 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:39:27.341778 | orchestrator | 2026-02-18 04:39:27.341784 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-18 04:39:27.341790 | orchestrator | Wednesday 18 February 2026 04:39:12 +0000 (0:00:01.605) 0:01:19.073 **** 2026-02-18 04:39:27.341796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-18 04:39:27.341809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 04:39:27.341815 | orchestrator | 2026-02-18 
04:39:27.341822 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-18 04:39:27.341828 | orchestrator | Wednesday 18 February 2026 04:39:14 +0000 (0:00:01.404) 0:01:20.477 **** 2026-02-18 04:39:27.341834 | orchestrator | changed: [testbed-manager] 2026-02-18 04:39:27.341840 | orchestrator | 2026-02-18 04:39:27.341846 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-18 04:39:27.341852 | orchestrator | Wednesday 18 February 2026 04:39:16 +0000 (0:00:02.051) 0:01:22.528 **** 2026-02-18 04:39:27.341858 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:39:27.341864 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:39:27.341870 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:39:27.341876 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:39:27.341882 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:39:27.341888 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:39:27.341895 | orchestrator | changed: [testbed-manager] 2026-02-18 04:39:27.341901 | orchestrator | 2026-02-18 04:39:27.341907 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:39:27.341918 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 04:39:27.341925 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 04:39:27.341931 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 04:39:27.341937 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 04:39:27.341948 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 04:39:27.782697 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 04:39:27.782794 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 04:39:27.782807 | orchestrator | 2026-02-18 04:39:27.782818 | orchestrator | 2026-02-18 04:39:27.782828 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:39:27.782839 | orchestrator | Wednesday 18 February 2026 04:39:27 +0000 (0:00:11.122) 0:01:33.651 **** 2026-02-18 04:39:27.782848 | orchestrator | =============================================================================== 2026-02-18 04:39:27.782858 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 40.81s 2026-02-18 04:39:27.782867 | orchestrator | osism.services.netdata : Add repository -------------------------------- 20.44s 2026-02-18 04:39:27.782877 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.12s 2026-02-18 04:39:27.782886 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.34s 2026-02-18 04:39:27.782896 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.05s 2026-02-18 04:39:27.782905 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 1.93s 2026-02-18 04:39:27.782914 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.76s 2026-02-18 04:39:27.782924 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.61s 2026-02-18 04:39:27.782933 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.53s 2026-02-18 04:39:27.782942 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.53s 2026-02-18 04:39:27.782952 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 
1.40s 2026-02-18 04:39:27.782961 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.29s 2026-02-18 04:39:27.782970 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.26s 2026-02-18 04:39:27.782979 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.26s 2026-02-18 04:39:27.782990 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.22s 2026-02-18 04:39:27.782999 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2026-02-18 04:39:30.300311 | orchestrator | 2026-02-18 04:39:30 | INFO  | Task d0dc0d87-340f-450f-8633-7703810638ea (prometheus) was prepared for execution. 2026-02-18 04:39:30.300469 | orchestrator | 2026-02-18 04:39:30 | INFO  | It takes a moment until task d0dc0d87-340f-450f-8633-7703810638ea (prometheus) has been started and output is visible here. 2026-02-18 04:39:38.586551 | orchestrator | 2026-02-18 04:39:38.586694 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 04:39:38.586737 | orchestrator | 2026-02-18 04:39:38.586759 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 04:39:38.586807 | orchestrator | Wednesday 18 February 2026 04:39:34 +0000 (0:00:00.275) 0:00:00.275 **** 2026-02-18 04:39:38.586828 | orchestrator | ok: [testbed-manager] 2026-02-18 04:39:38.586847 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:39:38.586882 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:39:38.586902 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:39:38.586922 | orchestrator | ok: [testbed-node-3] 2026-02-18 04:39:38.586941 | orchestrator | ok: [testbed-node-4] 2026-02-18 04:39:38.586961 | orchestrator | ok: [testbed-node-5] 2026-02-18 04:39:38.586980 | orchestrator | 2026-02-18 04:39:38.587000 | orchestrator | 
TASK [Group hosts based on enabled services] *********************************** 2026-02-18 04:39:38.587018 | orchestrator | Wednesday 18 February 2026 04:39:35 +0000 (0:00:00.825) 0:00:01.100 **** 2026-02-18 04:39:38.587037 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-02-18 04:39:38.587056 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-02-18 04:39:38.587076 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-02-18 04:39:38.587095 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-02-18 04:39:38.587117 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-02-18 04:39:38.587135 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-02-18 04:39:38.587154 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-02-18 04:39:38.587171 | orchestrator | 2026-02-18 04:39:38.587188 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-02-18 04:39:38.587226 | orchestrator | 2026-02-18 04:39:38.587258 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-18 04:39:38.587275 | orchestrator | Wednesday 18 February 2026 04:39:35 +0000 (0:00:00.694) 0:00:01.795 **** 2026-02-18 04:39:38.587294 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 04:39:38.587314 | orchestrator | 2026-02-18 04:39:38.587332 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-02-18 04:39:38.587350 | orchestrator | Wednesday 18 February 2026 04:39:36 +0000 (0:00:01.051) 0:00:02.846 **** 2026-02-18 04:39:38.587448 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-18 04:39:38.587475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:38.587496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:38.587533 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:38.587590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:38.587605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:38.587617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-18 04:39:38.587628 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:38.587639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:38.587651 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:38.587662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:38.587689 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:39.470886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:39.470957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-18 04:39:39.470969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:39.470979 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-18 04:39:39.470989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:39.471011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:39.471031 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-18 04:39:39.471044 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:39.471052 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:39.471059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:39.471067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:39.471075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-18 04:39:39.471088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:39.471095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-18 04:39:39.471112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-18 04:39:43.936060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:43.936173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:43.936189 | orchestrator | 2026-02-18 04:39:43.936202 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-18 04:39:43.936215 | orchestrator | Wednesday 18 February 2026 04:39:39 +0000 (0:00:02.467) 0:00:05.313 **** 2026-02-18 04:39:43.936228 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 04:39:43.936240 | orchestrator | 2026-02-18 04:39:43.936251 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-18 04:39:43.936261 | orchestrator | Wednesday 18 February 2026 04:39:40 +0000 (0:00:01.462) 0:00:06.776 **** 2026-02-18 04:39:43.936274 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-18 04:39:43.936312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:43.936325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:43.936336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:43.936452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:43.936469 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:43.936481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:43.936492 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:43.936512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:43.936524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:43.936536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:43.936548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:43.936574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:46.141811 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:46.141915 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:46.141956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:46.141969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:46.141980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:46.141993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-18 04:39:46.142187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-18 04:39:46.142231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-02-18 04:39:46.142246 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-18 04:39:46.142275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:46.142287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:46.142299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:46.142310 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:46.142323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:46.142346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:47.609543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:47.609666 | orchestrator | 2026-02-18 04:39:47.609682 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-18 04:39:47.609694 | orchestrator | Wednesday 18 February 2026 04:39:46 +0000 (0:00:05.199) 0:00:11.976 **** 2026-02-18 04:39:47.609708 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-18 04:39:47.609721 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:47.609734 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:47.609800 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-18 04:39:47.609833 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:47.609846 | orchestrator | skipping: [testbed-manager] 2026-02-18 04:39:47.609858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:47.609880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:47.609892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:47.609903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:47.609914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-18 04:39:47.609925 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:39:47.609936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:47.609953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:47.609972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:47.911600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:47.911706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:47.911723 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:39:47.911738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:47.911750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:47.911762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:47.911791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:47.911803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:47.911834 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:39:47.911866 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:47.911878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:47.911889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 04:39:47.911901 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:39:47.911912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:47.911923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:47.911935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 04:39:47.911946 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:39:47.911963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:47.911990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:49.404500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 04:39:49.404608 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:39:49.404626 | orchestrator | 2026-02-18 04:39:49.404639 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-18 04:39:49.404651 | orchestrator | Wednesday 18 February 2026 04:39:47 +0000 (0:00:01.783) 0:00:13.760 **** 2026-02-18 04:39:49.404664 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-18 04:39:49.404678 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:49.404691 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:49.404723 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-18 04:39:49.404784 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:49.404799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:49.404811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:49.404822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:49.404833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:49.404845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:49.404857 | orchestrator | skipping: [testbed-manager] 2026-02-18 04:39:49.404868 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:39:49.404893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:49.404904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:49.404924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:50.102464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:50.102550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:50.102561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:50.102568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:50.102575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:50.102610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:50.102617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 04:39:50.102624 | orchestrator | skipping: 
[testbed-node-1] 2026-02-18 04:39:50.102632 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:39:50.102652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:50.102664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:50.102675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 04:39:50.102685 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:39:50.102695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:50.102705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:50.102729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 04:39:50.102741 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:39:50.102751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 04:39:50.102767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 04:39:53.477090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 04:39:53.477169 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:39:53.477179 | orchestrator | 2026-02-18 04:39:53.477187 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-18 04:39:53.477195 | orchestrator | Wednesday 18 February 2026 04:39:50 +0000 (0:00:02.186) 0:00:15.947 **** 2026-02-18 04:39:53.477203 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-18 04:39:53.477211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:53.477237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:53.477255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:53.477262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:53.477280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:53.477287 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-02-18 04:39:53.477293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:39:53.477300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:53.477313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:53.477320 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:53.477331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:53.477337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:53.477350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-02-18 04:39:56.100894 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:56.100989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:56.101023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:56.101033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:56.101057 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-18 04:39:56.101070 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-18 04:39:56.101094 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-18 04:39:56.101104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-18 04:39:56.101114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:56.101130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:56.101139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:39:56.101152 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:56.101162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:56.101171 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:56.101188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:39:59.962643 | orchestrator | 2026-02-18 04:39:59.962751 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-18 04:39:59.962768 | orchestrator | Wednesday 18 February 2026 04:39:56 +0000 (0:00:05.991) 0:00:21.939 **** 2026-02-18 04:39:59.962779 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 04:39:59.962789 | orchestrator | 2026-02-18 04:39:59.962799 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-18 04:39:59.962835 | orchestrator | Wednesday 18 February 2026 04:39:56 +0000 (0:00:00.876) 0:00:22.815 **** 2026-02-18 04:39:59.962848 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 
1320047, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.465067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:39:59.962862 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320047, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.465067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:39:59.962872 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320047, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.465067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:39:59.962896 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320047, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.465067, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:39:59.962907 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320093, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4824054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:39:59.962917 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320047, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.465067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-18 04:39:59.962943 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320047, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.465067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-02-18 04:39:59.962961 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320047, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.465067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:39:59.962971 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320093, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4824054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:39:59.962982 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320093, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4824054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:39:59.962996 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320042, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4631827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:39:59.963006 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320093, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4824054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:39:59.963016 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320042, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4631827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:39:59.963033 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320042, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4631827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566319 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320093, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4824054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566459 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320065, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4708228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566477 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320042, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4631827, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566503 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320093, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4824054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566515 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320065, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4708228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566526 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320065, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4708228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566554 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320039, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566581 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320093, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4824054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-18 04:40:01.566592 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320039, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566602 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320042, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4631827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566617 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320042, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4631827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566627 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320039, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566637 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320065, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4708228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566654 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320065, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4708228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:01.566672 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320050, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4656546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906757 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320050, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4656546, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906816 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320065, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4708228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906833 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320050, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4656546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906840 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320039, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906846 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320062, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4699538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906862 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320039, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906868 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320039, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906883 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320062, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4699538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906890 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320062, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4699538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906898 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320050, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4656546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906904 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 
'inode': 1320056, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4680033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906909 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320050, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4656546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906919 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320050, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4656546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906924 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320056, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4680033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:02.906934 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320045, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4642181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221027 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320042, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4631827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-18 04:40:04.221214 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320056, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4680033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-02-18 04:40:04.221234 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320062, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4699538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221273 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320062, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4699538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221285 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320062, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4699538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221296 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320045, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4642181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221308 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320056, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4680033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221337 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320045, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4642181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221355 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320092, 'dev': 
98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221366 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320056, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4680033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221408 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320056, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4680033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221421 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320045, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4642181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221433 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320045, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4642181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221444 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319751, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3651016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:04.221464 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320092, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 
04:40:05.507948 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320092, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508056 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320092, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508091 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320092, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508140 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319751, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3651016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508153 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320065, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4708228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-18 04:40:05.508165 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320045, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4642181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508177 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319751, 
'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3651016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508214 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320107, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508251 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319751, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3651016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508281 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319751, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3651016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508302 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320107, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508321 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320107, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508340 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320092, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 
04:40:05.508360 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320107, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:05.508448 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320107, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:06.761461 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1320068, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:06.761550 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1320068, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:06.761564 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319751, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3651016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:06.761575 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1320068, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:06.761585 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320039, 
'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-18 04:40:06.761595 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1320068, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:06.761605 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320040, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4627526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-18 04:40:06.761660 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1320068, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:06.761672 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320107, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:06.761683 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320040, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4627526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:06.761692 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320040, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4627526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:06.761702 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320040, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4627526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:06.761712 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1320068, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:06.761722 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1320037, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:06.761751 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320040, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4627526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.019911 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1320037, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020062 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1320037, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020078 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1320037, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020090 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320050, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4656546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020101 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1320037, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020135 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320040, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4627526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020159 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1320060, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4694543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020188 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1320060, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4694543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020201 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1320060, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4694543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020212 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1320060, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4694543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020223 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1320060, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4694543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020234 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1320037, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020260 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1320058, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4686422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020277 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1320058, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4686422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:08.020297 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1320058, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4686422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636215 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1320058, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4686422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636330 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1320058, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4686422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636346 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320105, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636359 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:40:13.636372 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1320060, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4694543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636449 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320105, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636463 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:40:13.636490 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320062, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4699538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636520 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320105, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636533 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:40:13.636544 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320105, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636555 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:40:13.636567 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320105, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636578 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:40:13.636589 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1320058, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4686422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636609 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320105, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636620 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:40:13.636637 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320056, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4680033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636648 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320045, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4642181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:13.636668 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320092, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:38.617781 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319751, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3651016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:38.617929 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320107, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:38.617958 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1320068, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.481183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:38.618008 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320040, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4627526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:38.618107 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1320037, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4618926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:38.618142 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1320060, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4694543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:38.618155 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1320058, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4686422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:38.618204 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320105, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.4851155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-18 04:40:38.618221 | orchestrator |
2026-02-18 04:40:38.618235 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-18 04:40:38.618249 | orchestrator | Wednesday 18 February 2026 04:40:20 +0000 (0:00:23.689) 0:00:46.505 ****
2026-02-18 04:40:38.618260 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-18 04:40:38.618272 | orchestrator |
2026-02-18 04:40:38.618285 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-18 04:40:38.618297 | orchestrator | Wednesday 18 February 2026 04:40:21 +0000 (0:00:00.747) 0:00:47.253 ****
2026-02-18 04:40:38.618320 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-02-18 04:40:38.618493 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-02-18 04:40:38.618554 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-02-18 04:40:38.618614 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-02-18 04:40:38.618672 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-02-18 04:40:38.618725 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-02-18 04:40:38.618779 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-02-18 04:40:38.618839 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-18 04:40:38.618850 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-18 04:40:38.618860 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-18 04:40:38.618871 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-18 04:40:38.618882 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-18 04:40:38.618892 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-18 04:40:38.618903 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-18 04:40:38.618914 | orchestrator |
2026-02-18 04:40:38.618925 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-02-18 04:40:38.618935 | orchestrator | Wednesday 18 February 2026 04:40:23 +0000 (0:00:01.756) 0:00:49.009 ****
2026-02-18 04:40:38.618957 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-18 04:40:38.618969 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-18 04:40:38.618980 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:40:38.618991 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:40:38.619002 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-18 04:40:38.619013 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:40:38.619034 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-18 04:40:55.129486 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:40:55.129637 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-18 04:40:55.129665 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:40:55.129685 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-18 04:40:55.129704 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:40:55.129723 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-18 04:40:55.129741 | orchestrator |
2026-02-18 04:40:55.129762 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-02-18 04:40:55.129784 | orchestrator | Wednesday 18 February 2026 04:40:38 +0000 (0:00:15.453) 0:01:04.462 ****
2026-02-18 04:40:55.129803 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-18 04:40:55.129823 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-18 04:40:55.129841 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:40:55.129860 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:40:55.129879 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-18 04:40:55.129898 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:40:55.129918 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-18 04:40:55.129936 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:40:55.129955 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-18 04:40:55.129975 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:40:55.129996 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-18 04:40:55.130099 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:40:55.130123 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-18 04:40:55.130146 | orchestrator |
2026-02-18 04:40:55.130169 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-02-18 04:40:55.130190 | orchestrator | Wednesday 18 February 2026 04:40:41 +0000 (0:00:02.989) 0:01:07.452 ****
2026-02-18 04:40:55.130211 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-18 04:40:55.130231 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:40:55.130254 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-18 04:40:55.130275 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:40:55.130296 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-18 04:40:55.130316 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:40:55.130336 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-18 04:40:55.130357 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:40:55.130410 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-18 04:40:55.130432 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:40:55.130479 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-18 04:40:55.130498 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:40:55.130537 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-18 04:40:55.130558 | orchestrator |
2026-02-18 04:40:55.130579 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-02-18 04:40:55.130599 | orchestrator | Wednesday 18 February 2026 04:40:43 +0000 (0:00:01.775) 0:01:09.227 ****
2026-02-18 04:40:55.130647 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-18 04:40:55.130668 | orchestrator |
2026-02-18 04:40:55.130689 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-02-18 04:40:55.130709 | orchestrator | Wednesday 18 February 2026 04:40:44 +0000 (0:00:00.723) 0:01:09.951 ****
2026-02-18 04:40:55.130728 | orchestrator | skipping: [testbed-manager]
2026-02-18 04:40:55.130746 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:40:55.130765 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:40:55.130803 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:40:55.130822 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:40:55.130840 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:40:55.130858 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:40:55.130876 | orchestrator |
2026-02-18 04:40:55.130895 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-18 04:40:55.130914 | orchestrator | Wednesday 18 February 2026 04:40:44 +0000 (0:00:00.715) 0:01:10.667 ****
2026-02-18 04:40:55.130933 | orchestrator | skipping: [testbed-manager]
2026-02-18 04:40:55.130951 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:40:55.130970 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:40:55.130989 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:40:55.131009 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:40:55.131029 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:40:55.131049 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:40:55.131068 | orchestrator |
2026-02-18 04:40:55.131087 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-18 04:40:55.131135 | orchestrator | Wednesday 18 February 2026 04:40:46 +0000 (0:00:01.969) 0:01:12.636 ****
2026-02-18 04:40:55.131156 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-18 04:40:55.131174 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-18 04:40:55.131193 | orchestrator | skipping: [testbed-manager]
2026-02-18 04:40:55.131212 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-18 04:40:55.131230 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-18 04:40:55.131249 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-18 04:40:55.131262 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:40:55.131273 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:40:55.131284 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:40:55.131294 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:40:55.131305 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-18 04:40:55.131316 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:40:55.131326 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-18 04:40:55.131337 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:40:55.131348 | orchestrator |
2026-02-18 04:40:55.131359 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-18 04:40:55.131386 | orchestrator | Wednesday 18 February 2026 04:40:48 +0000 (0:00:01.471) 0:01:14.108 ****
2026-02-18 04:40:55.131398 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-18 04:40:55.131409 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-18 04:40:55.131419 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:40:55.131521 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:40:55.131545
| orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-18 04:40:55.131557 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:40:55.131568 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-18 04:40:55.131579 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:40:55.131590 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-18 04:40:55.131600 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:40:55.131611 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-18 04:40:55.131622 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:40:55.131633 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-18 04:40:55.131644 | orchestrator | 2026-02-18 04:40:55.131655 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-18 04:40:55.131664 | orchestrator | Wednesday 18 February 2026 04:40:49 +0000 (0:00:01.623) 0:01:15.731 **** 2026-02-18 04:40:55.131674 | orchestrator | [WARNING]: Skipped 2026-02-18 04:40:55.131685 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-18 04:40:55.131694 | orchestrator | due to this access issue: 2026-02-18 04:40:55.131704 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-18 04:40:55.131713 | orchestrator | not a directory 2026-02-18 04:40:55.131723 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 04:40:55.131732 | orchestrator | 2026-02-18 04:40:55.131742 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 
2026-02-18 04:40:55.131760 | orchestrator | Wednesday 18 February 2026 04:40:51 +0000 (0:00:01.136) 0:01:16.868 **** 2026-02-18 04:40:55.131770 | orchestrator | skipping: [testbed-manager] 2026-02-18 04:40:55.131779 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:40:55.131789 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:40:55.131798 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:40:55.131808 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:40:55.131817 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:40:55.131827 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:40:55.131836 | orchestrator | 2026-02-18 04:40:55.131846 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-18 04:40:55.131856 | orchestrator | Wednesday 18 February 2026 04:40:51 +0000 (0:00:00.947) 0:01:17.816 **** 2026-02-18 04:40:55.131865 | orchestrator | skipping: [testbed-manager] 2026-02-18 04:40:55.131875 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:40:55.131884 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:40:55.131894 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:40:55.131903 | orchestrator | skipping: [testbed-node-3] 2026-02-18 04:40:55.131912 | orchestrator | skipping: [testbed-node-4] 2026-02-18 04:40:55.131921 | orchestrator | skipping: [testbed-node-5] 2026-02-18 04:40:55.131931 | orchestrator | 2026-02-18 04:40:55.131940 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-18 04:40:55.131950 | orchestrator | Wednesday 18 February 2026 04:40:52 +0000 (0:00:00.897) 0:01:18.714 **** 2026-02-18 04:40:55.131983 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-18 04:40:56.720855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:40:56.720939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:40:56.720948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:40:56.720955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:40:56.720974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:40:56.720982 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:40:56.721006 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-18 04:40:56.721025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:40:56.721033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:40:56.721040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:40:56.721047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:40:56.721054 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:40:56.721064 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:40:56.721071 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:40:56.721083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:40:56.721095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:40:58.635002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:40:58.635112 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-18 04:40:58.635132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-18 04:40:58.635163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-18 04:40:58.635196 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-18 04:40:58.635209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:40:58.635238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:40:58.635250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-18 04:40:58.635262 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:40:58.635273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:40:58.635290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:40:58.635308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 04:40:58.635320 | orchestrator | 2026-02-18 04:40:58.635332 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-18 04:40:58.635345 | orchestrator | Wednesday 18 February 2026 04:40:56 +0000 (0:00:03.853) 0:01:22.568 **** 2026-02-18 04:40:58.635356 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-18 04:40:58.635367 | orchestrator | skipping: [testbed-manager] 2026-02-18 04:40:58.635378 | orchestrator | 2026-02-18 04:40:58.635389 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-18 04:40:58.635401 | orchestrator | Wednesday 18 February 2026 04:40:57 +0000 (0:00:01.216) 0:01:23.784 **** 2026-02-18 04:40:58.635411 | orchestrator | 2026-02-18 04:40:58.635422 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-18 04:40:58.635529 | orchestrator | Wednesday 18 February 2026 04:40:58 +0000 (0:00:00.242) 0:01:24.027 **** 2026-02-18 04:40:58.635546 | orchestrator | 
2026-02-18 04:40:58.635559 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-18 04:40:58.635571 | orchestrator | Wednesday 18 February 2026 04:40:58 +0000 (0:00:00.072) 0:01:24.099 **** 2026-02-18 04:40:58.635583 | orchestrator | 2026-02-18 04:40:58.635595 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-18 04:40:58.635607 | orchestrator | Wednesday 18 February 2026 04:40:58 +0000 (0:00:00.074) 0:01:24.173 **** 2026-02-18 04:40:58.635619 | orchestrator | 2026-02-18 04:40:58.635631 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-18 04:40:58.635643 | orchestrator | Wednesday 18 February 2026 04:40:58 +0000 (0:00:00.065) 0:01:24.239 **** 2026-02-18 04:40:58.635672 | orchestrator | 2026-02-18 04:40:58.635685 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-18 04:40:58.635697 | orchestrator | Wednesday 18 February 2026 04:40:58 +0000 (0:00:00.068) 0:01:24.307 **** 2026-02-18 04:40:58.635709 | orchestrator | 2026-02-18 04:40:58.635722 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-18 04:40:58.635743 | orchestrator | Wednesday 18 February 2026 04:40:58 +0000 (0:00:00.065) 0:01:24.373 **** 2026-02-18 04:42:48.623525 | orchestrator | 2026-02-18 04:42:48.623642 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-18 04:42:48.623661 | orchestrator | Wednesday 18 February 2026 04:40:58 +0000 (0:00:00.090) 0:01:24.464 **** 2026-02-18 04:42:48.623673 | orchestrator | changed: [testbed-manager] 2026-02-18 04:42:48.623686 | orchestrator | 2026-02-18 04:42:48.623697 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-18 04:42:48.623709 | orchestrator | Wednesday 18 February 2026 04:41:25 +0000 
(0:00:27.065) 0:01:51.529 **** 2026-02-18 04:42:48.623720 | orchestrator | changed: [testbed-manager] 2026-02-18 04:42:48.623731 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:42:48.623742 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:42:48.623753 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:42:48.623763 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:42:48.623774 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:42:48.623786 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:42:48.623797 | orchestrator | 2026-02-18 04:42:48.623808 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-18 04:42:48.623819 | orchestrator | Wednesday 18 February 2026 04:41:38 +0000 (0:00:12.889) 0:02:04.419 **** 2026-02-18 04:42:48.623829 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:42:48.623865 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:42:48.623876 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:42:48.623887 | orchestrator | 2026-02-18 04:42:48.623898 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-18 04:42:48.623909 | orchestrator | Wednesday 18 February 2026 04:41:49 +0000 (0:00:10.572) 0:02:14.991 **** 2026-02-18 04:42:48.623920 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:42:48.623931 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:42:48.623941 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:42:48.623952 | orchestrator | 2026-02-18 04:42:48.623963 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-18 04:42:48.623974 | orchestrator | Wednesday 18 February 2026 04:41:59 +0000 (0:00:10.220) 0:02:25.212 **** 2026-02-18 04:42:48.623984 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:42:48.623995 | orchestrator | changed: [testbed-manager] 2026-02-18 04:42:48.624005 | orchestrator | changed: 
[testbed-node-3] 2026-02-18 04:42:48.624016 | orchestrator | changed: [testbed-node-5] 2026-02-18 04:42:48.624029 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:42:48.624047 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:42:48.624064 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:42:48.624077 | orchestrator | 2026-02-18 04:42:48.624091 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-02-18 04:42:48.624103 | orchestrator | Wednesday 18 February 2026 04:42:13 +0000 (0:00:13.865) 0:02:39.077 **** 2026-02-18 04:42:48.624115 | orchestrator | changed: [testbed-manager] 2026-02-18 04:42:48.624128 | orchestrator | 2026-02-18 04:42:48.624140 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-18 04:42:48.624153 | orchestrator | Wednesday 18 February 2026 04:42:26 +0000 (0:00:13.612) 0:02:52.690 **** 2026-02-18 04:42:48.624165 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:42:48.624192 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:42:48.624204 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:42:48.624217 | orchestrator | 2026-02-18 04:42:48.624256 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-18 04:42:48.624268 | orchestrator | Wednesday 18 February 2026 04:42:36 +0000 (0:00:10.144) 0:03:02.834 **** 2026-02-18 04:42:48.624281 | orchestrator | changed: [testbed-manager] 2026-02-18 04:42:48.624293 | orchestrator | 2026-02-18 04:42:48.624305 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-18 04:42:48.624317 | orchestrator | Wednesday 18 February 2026 04:42:42 +0000 (0:00:05.801) 0:03:08.636 **** 2026-02-18 04:42:48.624329 | orchestrator | changed: [testbed-node-4] 2026-02-18 04:42:48.624342 | orchestrator | changed: [testbed-node-3] 2026-02-18 04:42:48.624354 | orchestrator | changed: 
[testbed-node-5] 2026-02-18 04:42:48.624366 | orchestrator | 2026-02-18 04:42:48.624379 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:42:48.624392 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-18 04:42:48.624407 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-18 04:42:48.624418 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-18 04:42:48.624429 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-18 04:42:48.624440 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-18 04:42:48.624451 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-18 04:42:48.624470 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-18 04:42:48.624481 | orchestrator | 2026-02-18 04:42:48.624492 | orchestrator | 2026-02-18 04:42:48.624503 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:42:48.624514 | orchestrator | Wednesday 18 February 2026 04:42:48 +0000 (0:00:05.315) 0:03:13.951 **** 2026-02-18 04:42:48.624525 | orchestrator | =============================================================================== 2026-02-18 04:42:48.624536 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 27.07s 2026-02-18 04:42:48.624571 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.69s 2026-02-18 04:42:48.624583 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.45s 2026-02-18 04:42:48.624593 | orchestrator | 
prometheus : Restart prometheus-cadvisor container --------------------- 13.87s 2026-02-18 04:42:48.624604 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.61s 2026-02-18 04:42:48.624615 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.89s 2026-02-18 04:42:48.624626 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.57s 2026-02-18 04:42:48.624636 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.22s 2026-02-18 04:42:48.624647 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.14s 2026-02-18 04:42:48.624658 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.99s 2026-02-18 04:42:48.624668 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.80s 2026-02-18 04:42:48.624679 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.32s 2026-02-18 04:42:48.624689 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.20s 2026-02-18 04:42:48.624700 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.85s 2026-02-18 04:42:48.624711 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.99s 2026-02-18 04:42:48.624721 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.47s 2026-02-18 04:42:48.624732 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.19s 2026-02-18 04:42:48.624743 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.97s 2026-02-18 04:42:48.624753 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.78s 2026-02-18 04:42:48.624764 | orchestrator | 
prometheus : Copying over prometheus alertmanager config file ----------- 1.78s 2026-02-18 04:42:52.146353 | orchestrator | 2026-02-18 04:42:52 | INFO  | Task 253f909a-6be1-4a6c-aa93-f926c6501daf (grafana) was prepared for execution. 2026-02-18 04:42:52.146474 | orchestrator | 2026-02-18 04:42:52 | INFO  | It takes a moment until task 253f909a-6be1-4a6c-aa93-f926c6501daf (grafana) has been started and output is visible here. 2026-02-18 04:43:02.149028 | orchestrator | 2026-02-18 04:43:02.149136 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 04:43:02.149151 | orchestrator | 2026-02-18 04:43:02.149162 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 04:43:02.149225 | orchestrator | Wednesday 18 February 2026 04:42:56 +0000 (0:00:00.264) 0:00:00.264 **** 2026-02-18 04:43:02.149239 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:43:02.149250 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:43:02.149261 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:43:02.149270 | orchestrator | 2026-02-18 04:43:02.149280 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 04:43:02.149290 | orchestrator | Wednesday 18 February 2026 04:42:56 +0000 (0:00:00.322) 0:00:00.587 **** 2026-02-18 04:43:02.149300 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-18 04:43:02.149309 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-18 04:43:02.149338 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-18 04:43:02.149349 | orchestrator | 2026-02-18 04:43:02.149358 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-18 04:43:02.149368 | orchestrator | 2026-02-18 04:43:02.149377 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-18 
04:43:02.149387 | orchestrator | Wednesday 18 February 2026 04:42:57 +0000 (0:00:00.452) 0:00:01.039 **** 2026-02-18 04:43:02.149397 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:43:02.149408 | orchestrator | 2026-02-18 04:43:02.149417 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-18 04:43:02.149427 | orchestrator | Wednesday 18 February 2026 04:42:57 +0000 (0:00:00.552) 0:00:01.592 **** 2026-02-18 04:43:02.149439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:02.149454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:02.149464 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:02.149474 | orchestrator | 2026-02-18 04:43:02.149484 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-18 04:43:02.149494 | orchestrator | Wednesday 18 February 2026 04:42:58 +0000 (0:00:00.880) 0:00:02.473 **** 2026-02-18 04:43:02.149504 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-18 04:43:02.149514 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-18 04:43:02.149524 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 04:43:02.149533 | orchestrator | 2026-02-18 04:43:02.149543 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-18 04:43:02.149553 | orchestrator | Wednesday 18 February 2026 04:42:59 +0000 (0:00:00.818) 0:00:03.291 **** 2026-02-18 04:43:02.149565 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:43:02.149576 | orchestrator | 2026-02-18 04:43:02.149588 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-18 04:43:02.149607 | orchestrator | Wednesday 18 February 2026 04:43:00 +0000 (0:00:00.586) 0:00:03.878 **** 
2026-02-18 04:43:02.149641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:02.149654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:02.149666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:02.149678 | orchestrator | 2026-02-18 04:43:02.149689 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-18 04:43:02.149700 | orchestrator | Wednesday 18 February 2026 04:43:01 +0000 (0:00:01.360) 0:00:05.239 **** 2026-02-18 04:43:02.149712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-18 04:43:02.149724 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:43:02.149736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-18 04:43:02.149748 | 
orchestrator | skipping: [testbed-node-1] 2026-02-18 04:43:02.149778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-18 04:43:08.564578 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:43:08.564689 | orchestrator | 2026-02-18 04:43:08.564705 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-18 04:43:08.564718 | orchestrator | Wednesday 18 February 2026 04:43:02 +0000 (0:00:00.582) 0:00:05.821 **** 2026-02-18 04:43:08.564733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-18 04:43:08.564750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-18 04:43:08.564761 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:43:08.564773 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:43:08.564785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-18 04:43:08.564797 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:43:08.564808 | orchestrator | 2026-02-18 04:43:08.564819 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-18 04:43:08.564830 | orchestrator | Wednesday 18 February 2026 04:43:02 +0000 (0:00:00.594) 0:00:06.416 **** 2026-02-18 04:43:08.564841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:08.564876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:08.564935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:08.564949 | 
orchestrator | 2026-02-18 04:43:08.564960 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-18 04:43:08.564972 | orchestrator | Wednesday 18 February 2026 04:43:03 +0000 (0:00:01.159) 0:00:07.576 **** 2026-02-18 04:43:08.564983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:08.564994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:08.565006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:43:08.565025 | orchestrator | 2026-02-18 04:43:08.565037 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-18 04:43:08.565048 | orchestrator | Wednesday 18 February 2026 04:43:05 +0000 (0:00:01.535) 0:00:09.111 **** 2026-02-18 04:43:08.565058 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:43:08.565069 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:43:08.565080 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:43:08.565091 | orchestrator | 2026-02-18 04:43:08.565101 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-18 04:43:08.565112 | orchestrator | Wednesday 18 February 2026 04:43:05 +0000 (0:00:00.328) 0:00:09.439 **** 2026-02-18 04:43:08.565123 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-18 04:43:08.565135 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-18 04:43:08.565145 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-18 04:43:08.565179 | orchestrator | 2026-02-18 04:43:08.565191 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-18 04:43:08.565202 | orchestrator | Wednesday 18 February 2026 04:43:06 +0000 (0:00:01.187) 0:00:10.627 **** 2026-02-18 04:43:08.565213 | orchestrator | changed: [testbed-node-0] 
=> (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-18 04:43:08.565224 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-18 04:43:08.565236 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-18 04:43:08.565247 | orchestrator | 2026-02-18 04:43:08.565263 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-02-18 04:43:08.565282 | orchestrator | Wednesday 18 February 2026 04:43:08 +0000 (0:00:01.605) 0:00:12.233 **** 2026-02-18 04:43:15.108228 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 04:43:15.108341 | orchestrator | 2026-02-18 04:43:15.108357 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-02-18 04:43:15.108370 | orchestrator | Wednesday 18 February 2026 04:43:09 +0000 (0:00:00.834) 0:00:13.067 **** 2026-02-18 04:43:15.108381 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-02-18 04:43:15.108394 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-02-18 04:43:15.108405 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:43:15.108417 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:43:15.108428 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:43:15.108439 | orchestrator | 2026-02-18 04:43:15.108450 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-02-18 04:43:15.108461 | orchestrator | Wednesday 18 February 2026 04:43:10 +0000 (0:00:00.693) 0:00:13.761 **** 2026-02-18 04:43:15.108474 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:43:15.108485 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:43:15.108496 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:43:15.108508 
| orchestrator | 2026-02-18 04:43:15.108519 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-18 04:43:15.108530 | orchestrator | Wednesday 18 February 2026 04:43:10 +0000 (0:00:00.366) 0:00:14.127 **** 2026-02-18 04:43:15.108544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1318593, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.046174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:15.108584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1318593, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.046174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:15.108597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
117836, 'inode': 1318593, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.046174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:15.108610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1319391, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2274256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:15.108658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1319391, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2274256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:15.108673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 25686, 'inode': 1319391, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2274256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:15.108685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1318598, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0501742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:15.108704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1318598, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0501742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:15.108717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 25279, 'inode': 1318598, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0501742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:15.108730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1319394, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2302003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:15.108743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1319394, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2302003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:15.108770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1319394, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2302003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.790826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1318937, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.1395755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.790978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1318937, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.1395755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.791021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1318937, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.1395755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.791035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1319383, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2251778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.791057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1319383, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2251778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.791085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1319383, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2251778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.791116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1318590, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0444653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.791181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1318590, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0444653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.791213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1318590, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0444653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.791230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1318595, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0471742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.791247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1318595, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0471742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.791271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1318595, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0471742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:18.791301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1318599, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0501742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1318599, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0501742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1318599, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0501742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1319325, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2091775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1319325, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2091775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1319325, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2091775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1319389, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2265077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1319389, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2265077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1319389, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2265077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1318597, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0481741, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1318597, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0481741, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1318597, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0481741, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1319382, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2245405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:22.523723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1319382, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2245405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1319382, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2245405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1319323, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2081773, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1319323, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2081773, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1319323, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2081773, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1318935, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.138176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1318935, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.138176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1318935, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.138176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1318932, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.1376579, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1318932, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.1376579, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1318932, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.1376579, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1319377, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2242215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1319377, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2242215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:26.625833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1319377, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2242215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1318600, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0511742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1318600, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0511742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1318600, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.0511742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1319387, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2251778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1319387, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2251778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1319387, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2251778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319746, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.362653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319746, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.362653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319746, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.362653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1319604, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.319193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1319604, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.319193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1319604, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.319193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:30.474754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1319408, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.233178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:34.467755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1319408, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.233178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:34.467843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1319408, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.233178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:34.467853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1319626, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3216348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-18 04:43:34.467862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1319626, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3216348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:34.467899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1319626, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3216348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:34.467907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1319401, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.230522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:34.467929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1319401, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.230522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:34.467936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1319401, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.230522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:34.467943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1319716, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.353566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:34.467949 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1319716, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.353566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:34.467964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1319716, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.353566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:34.467970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1319629, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3281798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-02-18 04:43:34.467982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1319629, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3281798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1319629, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3281798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1319718, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3552463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1319718, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3552463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1319718, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3552463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319737, 'dev': 98, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3605082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319737, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3605082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319737, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3605082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 
1319714, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3528488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1319714, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3528488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1319714, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3528488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1319618, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3212152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.175996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1319618, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3212152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:38.176016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1319618, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3212152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.768840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1319599, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3149078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.768933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1319599, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3149078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.768965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1319599, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3149078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.768987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1319615, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.319193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.768997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1319615, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.319193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.769005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1319615, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.319193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.769028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1319410, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.235178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.769039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1319410, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.235178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.769096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1319623, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3216348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.769111 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1319410, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.235178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.769119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1319623, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3216348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.769128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1319730, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3595738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:41.769143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1319730, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3595738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1319623, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3216348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319726, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3577237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319726, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3577237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1319730, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3595738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1319403, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771382469.2311854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1319403, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2311854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319726, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3577237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
53882, 'inode': 1319406, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.232178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1319406, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.232178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1319403, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.2311854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1319711, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3525372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1319711, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3525372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:43:45.928466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1319406, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.232178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:45:22.821645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1319723, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3556337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:45:22.821947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1319723, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3556337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:45:22.822004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1319711, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3525372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:45:22.822095 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1319723, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771382469.3556337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-18 04:45:22.822118 | orchestrator | 2026-02-18 04:45:22.822143 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-18 04:45:22.822167 | orchestrator | Wednesday 18 February 2026 04:43:47 +0000 (0:00:37.327) 0:00:51.454 **** 2026-02-18 04:45:22.822189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:45:22.822238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:45:22.822292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-18 04:45:22.822312 | orchestrator | 2026-02-18 04:45:22.822337 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-18 04:45:22.822366 | orchestrator | Wednesday 18 February 2026 04:43:48 +0000 (0:00:00.978) 0:00:52.432 **** 2026-02-18 04:45:22.822386 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:45:22.822407 | orchestrator | 2026-02-18 04:45:22.822427 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-18 04:45:22.822447 | orchestrator | Wednesday 18 February 2026 04:43:51 +0000 (0:00:02.348) 0:00:54.781 **** 2026-02-18 04:45:22.822466 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:45:22.822485 | orchestrator | 2026-02-18 04:45:22.822505 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-18 04:45:22.822524 | 
orchestrator | Wednesday 18 February 2026 04:43:53 +0000 (0:00:02.141) 0:00:56.923 ****
2026-02-18 04:45:22.822545 | orchestrator |
2026-02-18 04:45:22.822576 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-18 04:45:22.822598 | orchestrator | Wednesday 18 February 2026 04:43:53 +0000 (0:00:00.093) 0:00:57.017 ****
2026-02-18 04:45:22.822619 | orchestrator |
2026-02-18 04:45:22.822638 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-18 04:45:22.822658 | orchestrator | Wednesday 18 February 2026 04:43:53 +0000 (0:00:00.081) 0:00:57.098 ****
2026-02-18 04:45:22.822679 | orchestrator |
2026-02-18 04:45:22.822698 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-02-18 04:45:22.822710 | orchestrator | Wednesday 18 February 2026 04:43:53 +0000 (0:00:00.082) 0:00:57.181 ****
2026-02-18 04:45:22.822722 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:45:22.822732 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:45:22.822743 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:45:22.822754 | orchestrator |
2026-02-18 04:45:22.822764 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-02-18 04:45:22.822803 | orchestrator | Wednesday 18 February 2026 04:43:55 +0000 (0:00:02.276) 0:00:59.458 ****
2026-02-18 04:45:22.822814 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:45:22.822825 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:45:22.822836 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-02-18 04:45:22.822847 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-02-18 04:45:22.822858 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-02-18 04:45:22.822869 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-02-18 04:45:22.822892 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:45:22.822904 | orchestrator |
2026-02-18 04:45:22.822914 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-02-18 04:45:22.822925 | orchestrator | Wednesday 18 February 2026 04:44:46 +0000 (0:00:50.569) 0:01:50.027 ****
2026-02-18 04:45:22.822935 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:45:22.822946 | orchestrator | changed: [testbed-node-2]
2026-02-18 04:45:22.822957 | orchestrator | changed: [testbed-node-1]
2026-02-18 04:45:22.822967 | orchestrator |
2026-02-18 04:45:22.822978 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-02-18 04:45:22.822989 | orchestrator | Wednesday 18 February 2026 04:45:17 +0000 (0:00:31.302) 0:02:21.329 ****
2026-02-18 04:45:22.822999 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:45:22.823010 | orchestrator |
2026-02-18 04:45:22.823021 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-02-18 04:45:22.823037 | orchestrator | Wednesday 18 February 2026 04:45:19 +0000 (0:00:02.184) 0:02:23.514 ****
2026-02-18 04:45:22.823056 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:45:22.823074 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:45:22.823091 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:45:22.823109 | orchestrator |
2026-02-18 04:45:22.823126 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-02-18 04:45:22.823144 | orchestrator | Wednesday 18 February 2026 04:45:20 +0000 (0:00:00.333) 0:02:23.848 ****
2026-02-18 04:45:22.823165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-02-18 04:45:22.823206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-02-18 04:45:23.488088 | orchestrator |
2026-02-18 04:45:23.488169 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-02-18 04:45:23.488180 | orchestrator | Wednesday 18 February 2026 04:45:22 +0000 (0:00:02.638) 0:02:26.487 ****
2026-02-18 04:45:23.488186 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:45:23.488193 | orchestrator |
2026-02-18 04:45:23.488199 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 04:45:23.488206 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-18 04:45:23.488214 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-18 04:45:23.488221 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-18 04:45:23.488226 | orchestrator |
2026-02-18 04:45:23.488232 | orchestrator |
2026-02-18 04:45:23.488238 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 04:45:23.488244 | orchestrator | Wednesday 18 February 2026 04:45:23 +0000 (0:00:00.313) 0:02:26.800 ****
2026-02-18 04:45:23.488249 | orchestrator | ===============================================================================
2026-02-18 04:45:23.488255 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.57s
2026-02-18 04:45:23.488261 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.33s
2026-02-18 04:45:23.488267 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 31.30s
2026-02-18 04:45:23.488288 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.64s
2026-02-18 04:45:23.488311 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.35s
2026-02-18 04:45:23.488317 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.28s
2026-02-18 04:45:23.488323 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.18s
2026-02-18 04:45:23.488329 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.14s
2026-02-18 04:45:23.488335 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.61s
2026-02-18 04:45:23.488340 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.54s
2026-02-18 04:45:23.488346 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.36s
2026-02-18 04:45:23.488352 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.19s
2026-02-18 04:45:23.488358 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.16s
2026-02-18 04:45:23.488363 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.98s
2026-02-18 04:45:23.488369 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.88s
2026-02-18 04:45:23.488375 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.83s
2026-02-18 04:45:23.488380 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.82s
2026-02-18 04:45:23.488386 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s
2026-02-18 04:45:23.488392 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.59s
2026-02-18 04:45:23.488397 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.59s
2026-02-18 04:45:23.818521 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh
2026-02-18 04:45:23.828259 | orchestrator | + set -e
2026-02-18 04:45:23.828353 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-18 04:45:23.828381 | orchestrator | ++ export INTERACTIVE=false
2026-02-18 04:45:23.828401 | orchestrator | ++ INTERACTIVE=false
2026-02-18 04:45:23.828420 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-18 04:45:23.828441 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-18 04:45:23.828462 | orchestrator | + source /opt/manager-vars.sh
2026-02-18 04:45:23.828481 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-18 04:45:23.828502 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-18 04:45:23.828522 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-18 04:45:23.828538 | orchestrator | ++ CEPH_VERSION=reef
2026-02-18 04:45:23.828549 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-18 04:45:23.828560 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-18 04:45:23.828570 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-18 04:45:23.828581 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-18 04:45:23.828593 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-18 04:45:23.828604 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-18 04:45:23.828615 | orchestrator | ++ export ARA=false
2026-02-18 04:45:23.828625 | orchestrator | ++ ARA=false
2026-02-18 04:45:23.828636 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-18 04:45:23.828647 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-18 04:45:23.828657 | orchestrator | ++ export TEMPEST=false
2026-02-18 04:45:23.828668 | orchestrator | ++ TEMPEST=false
2026-02-18 04:45:23.828678 | orchestrator | ++ export IS_ZUUL=true
2026-02-18 04:45:23.828689 | orchestrator | ++ IS_ZUUL=true
2026-02-18 04:45:23.828699 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 04:45:23.828710 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 04:45:23.828721 | orchestrator | ++ export EXTERNAL_API=false
2026-02-18 04:45:23.828731 | orchestrator | ++ EXTERNAL_API=false
2026-02-18 04:45:23.828741 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-18 04:45:23.828752 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-18 04:45:23.828762 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-18 04:45:23.828817 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-18 04:45:23.828828 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-18 04:45:23.828839 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-18 04:45:23.829535 | orchestrator | ++ semver 9.5.0 8.0.0
2026-02-18 04:45:23.895204 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-18 04:45:23.895323 | orchestrator | + osism apply clusterapi
2026-02-18 04:45:25.980337 | orchestrator | 2026-02-18 04:45:25 | INFO  | Task e6daa0ab-0efc-4201-b6c3-3093b3a7943e (clusterapi) was prepared for execution.
2026-02-18 04:45:25.980467 | orchestrator | 2026-02-18 04:45:25 | INFO  | It takes a moment until task e6daa0ab-0efc-4201-b6c3-3093b3a7943e (clusterapi) has been started and output is visible here.
2026-02-18 04:46:27.010165 | orchestrator |
2026-02-18 04:46:27.010284 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-02-18 04:46:27.010301 | orchestrator |
2026-02-18 04:46:27.010313 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-02-18 04:46:27.010324 | orchestrator | Wednesday 18 February 2026 04:45:30 +0000 (0:00:00.196) 0:00:00.196 ****
2026-02-18 04:46:27.010336 | orchestrator | included: cert_manager for testbed-manager
2026-02-18 04:46:27.010347 | orchestrator |
2026-02-18 04:46:27.010358 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-02-18 04:46:27.010369 | orchestrator | Wednesday 18 February 2026 04:45:30 +0000 (0:00:00.267) 0:00:00.463 ****
2026-02-18 04:46:27.010380 | orchestrator | changed: [testbed-manager]
2026-02-18 04:46:27.010392 | orchestrator |
2026-02-18 04:46:27.010403 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-02-18 04:46:27.010414 | orchestrator | Wednesday 18 February 2026 04:45:36 +0000 (0:00:05.494) 0:00:05.958 ****
2026-02-18 04:46:27.010425 | orchestrator | changed: [testbed-manager]
2026-02-18 04:46:27.010436 | orchestrator |
2026-02-18 04:46:27.010447 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-02-18 04:46:27.010457 | orchestrator |
2026-02-18 04:46:27.010468 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-02-18 04:46:27.010479 | orchestrator | Wednesday 18 February 2026 04:46:07 +0000 (0:00:31.330) 0:00:37.289 ****
2026-02-18 04:46:27.010490 | orchestrator | ok: [testbed-manager]
2026-02-18 04:46:27.010501 | orchestrator |
2026-02-18 04:46:27.010512 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-02-18 04:46:27.010523 | orchestrator | Wednesday 18 February 2026 04:46:08 +0000 (0:00:01.106) 0:00:38.396 ****
2026-02-18 04:46:27.010533 | orchestrator | ok: [testbed-manager]
2026-02-18 04:46:27.010544 | orchestrator |
2026-02-18 04:46:27.010555 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-02-18 04:46:27.010566 | orchestrator | Wednesday 18 February 2026 04:46:08 +0000 (0:00:00.149) 0:00:38.545 ****
2026-02-18 04:46:27.010594 | orchestrator | ok: [testbed-manager]
2026-02-18 04:46:27.010606 | orchestrator |
2026-02-18 04:46:27.010617 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-02-18 04:46:27.010683 | orchestrator | Wednesday 18 February 2026 04:46:24 +0000 (0:00:15.620) 0:00:54.166 ****
2026-02-18 04:46:27.010697 | orchestrator | skipping: [testbed-manager]
2026-02-18 04:46:27.010710 | orchestrator |
2026-02-18 04:46:27.010722 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-02-18 04:46:27.010735 | orchestrator | Wednesday 18 February 2026 04:46:24 +0000 (0:00:00.141) 0:00:54.308 ****
2026-02-18 04:46:27.010745 | orchestrator | changed: [testbed-manager]
2026-02-18 04:46:27.010756 | orchestrator |
2026-02-18 04:46:27.010767 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 04:46:27.010779 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-18 04:46:27.010790 | orchestrator |
2026-02-18 04:46:27.010801 | orchestrator |
2026-02-18 04:46:27.010812 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 04:46:27.010823 | orchestrator | Wednesday 18 February 2026 04:46:26 +0000 (0:00:02.250) 0:00:56.559 ****
2026-02-18 04:46:27.010834 | orchestrator | ===============================================================================
2026-02-18 04:46:27.010845 | orchestrator | cert_manager : Deploy cert-manager ------------------------------------- 31.33s
2026-02-18 04:46:27.010856 | orchestrator | Initialize the CAPI management cluster --------------------------------- 15.62s
2026-02-18 04:46:27.010866 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.49s
2026-02-18 04:46:27.010899 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.25s
2026-02-18 04:46:27.010911 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.11s
2026-02-18 04:46:27.010921 | orchestrator | Include cert_manager role ----------------------------------------------- 0.27s
2026-02-18 04:46:27.010932 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.15s
2026-02-18 04:46:27.010943 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.14s
2026-02-18 04:46:27.353504 | orchestrator | + osism apply magnum
2026-02-18 04:46:29.420946 | orchestrator | 2026-02-18 04:46:29 | INFO  | Task 5ca4d852-471c-4b9c-89a7-75f1bc55c04f (magnum) was prepared for execution.
2026-02-18 04:46:29.421039 | orchestrator | 2026-02-18 04:46:29 | INFO  | It takes a moment until task 5ca4d852-471c-4b9c-89a7-75f1bc55c04f (magnum) has been started and output is visible here.
2026-02-18 04:47:10.569215 | orchestrator |
2026-02-18 04:47:10.569304 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 04:47:10.569314 | orchestrator |
2026-02-18 04:47:10.569320 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 04:47:10.569327 | orchestrator | Wednesday 18 February 2026 04:46:33 +0000 (0:00:00.266) 0:00:00.266 ****
2026-02-18 04:47:10.569333 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:47:10.569340 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:47:10.569346 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:47:10.569352 | orchestrator |
2026-02-18 04:47:10.569358 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 04:47:10.569364 | orchestrator | Wednesday 18 February 2026 04:46:33 +0000 (0:00:00.312) 0:00:00.579 ****
2026-02-18 04:47:10.569370 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-02-18 04:47:10.569376 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-02-18 04:47:10.569382 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-02-18 04:47:10.569388 | orchestrator |
2026-02-18 04:47:10.569393 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-02-18 04:47:10.569399 | orchestrator |
2026-02-18 04:47:10.569405 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-18 04:47:10.569411 | orchestrator | Wednesday 18 February 2026 04:46:34 +0000 (0:00:00.465) 0:00:01.044 ****
2026-02-18 04:47:10.569416 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 04:47:10.569423 | orchestrator |
2026-02-18 04:47:10.569429 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-02-18 04:47:10.569435 | orchestrator | Wednesday 18 February 2026 04:46:34 +0000 (0:00:00.585) 0:00:01.629 ****
2026-02-18 04:47:10.569441 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-02-18 04:47:10.569447 | orchestrator |
2026-02-18 04:47:10.569453 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-02-18 04:47:10.569458 | orchestrator | Wednesday 18 February 2026 04:46:38 +0000 (0:00:03.389) 0:00:05.018 ****
2026-02-18 04:47:10.569464 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-02-18 04:47:10.569470 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-02-18 04:47:10.569476 | orchestrator |
2026-02-18 04:47:10.569482 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-02-18 04:47:10.569487 | orchestrator | Wednesday 18 February 2026 04:46:44 +0000 (0:00:06.219) 0:00:11.238 ****
2026-02-18 04:47:10.569493 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-18 04:47:10.569499 | orchestrator |
2026-02-18 04:47:10.569505 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-02-18 04:47:10.569510 | orchestrator | Wednesday 18 February 2026 04:46:47 +0000 (0:00:03.385) 0:00:14.624 ****
2026-02-18 04:47:10.569570 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-18 04:47:10.569579 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-02-18 04:47:10.569585 | orchestrator |
2026-02-18 04:47:10.569602 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-02-18 04:47:10.569608 | orchestrator | Wednesday 18 February 2026 04:46:51 +0000 (0:00:03.151) 0:00:18.344 ****
2026-02-18 04:47:10.569614 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-18 04:47:10.569620 | orchestrator |
2026-02-18 04:47:10.569626 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-02-18 04:47:10.569631 | orchestrator | Wednesday 18 February 2026 04:46:54 +0000 (0:00:03.151) 0:00:21.496 ****
2026-02-18 04:47:10.569637 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-02-18 04:47:10.569643 | orchestrator |
2026-02-18 04:47:10.569648 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-02-18 04:47:10.569654 | orchestrator | Wednesday 18 February 2026 04:46:58 +0000 (0:00:03.759) 0:00:25.256 ****
2026-02-18 04:47:10.569660 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:47:10.569665 | orchestrator |
2026-02-18 04:47:10.569671 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-02-18 04:47:10.569685 | orchestrator | Wednesday 18 February 2026 04:47:01 +0000 (0:00:03.222) 0:00:28.479 ****
2026-02-18 04:47:10.569691 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:47:10.569697 | orchestrator |
2026-02-18 04:47:10.569703 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-02-18 04:47:10.569709 | orchestrator | Wednesday 18 February 2026 04:47:05 +0000 (0:00:03.834) 0:00:32.313 ****
2026-02-18 04:47:10.569714 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:47:10.569720 | orchestrator |
2026-02-18 04:47:10.569726 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-02-18 04:47:10.569732 | orchestrator | Wednesday 18 February 2026 04:47:08 +0000 (0:00:03.317) 0:00:35.630 ****
2026-02-18 04:47:10.569753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:10.569763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:10.569770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:10.569784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:10.569794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:10.569805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:17.922661 | orchestrator | 2026-02-18 04:47:17.922769 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-18 04:47:17.922784 | orchestrator | Wednesday 18 February 2026 04:47:10 +0000 (0:00:01.608) 0:00:37.238 **** 2026-02-18 04:47:17.922795 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:47:17.922806 | orchestrator | 2026-02-18 04:47:17.922816 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-18 04:47:17.922826 | orchestrator | Wednesday 18 February 2026 04:47:10 +0000 (0:00:00.180) 0:00:37.419 **** 2026-02-18 04:47:17.922836 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:47:17.922846 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:47:17.922855 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:47:17.922865 | orchestrator | 2026-02-18 04:47:17.922874 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-18 04:47:17.922905 | orchestrator | Wednesday 18 February 2026 04:47:11 +0000 (0:00:00.300) 0:00:37.719 **** 2026-02-18 04:47:17.922915 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 04:47:17.922925 | orchestrator | 2026-02-18 04:47:17.922935 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-18 04:47:17.922944 | orchestrator | Wednesday 18 February 2026 04:47:11 +0000 (0:00:00.895) 0:00:38.615 **** 2026-02-18 04:47:17.922956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:17.922986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:17.923006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:17.923045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:17.923067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:17.923098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:17.923114 | orchestrator | 2026-02-18 04:47:17.923131 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-18 04:47:17.923148 
| orchestrator | Wednesday 18 February 2026 04:47:14 +0000 (0:00:02.350) 0:00:40.965 **** 2026-02-18 04:47:17.923166 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:47:17.923184 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:47:17.923201 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:47:17.923218 | orchestrator | 2026-02-18 04:47:17.923245 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-18 04:47:17.923262 | orchestrator | Wednesday 18 February 2026 04:47:14 +0000 (0:00:00.523) 0:00:41.489 **** 2026-02-18 04:47:17.923280 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 04:47:17.923298 | orchestrator | 2026-02-18 04:47:17.923316 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-18 04:47:17.923334 | orchestrator | Wednesday 18 February 2026 04:47:15 +0000 (0:00:00.561) 0:00:42.050 **** 2026-02-18 04:47:17.923352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:17.923381 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:18.866388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:18.866491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:18.866613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:18.866632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:18.866645 | orchestrator | 2026-02-18 04:47:18.866658 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-18 04:47:18.866670 | orchestrator | Wednesday 18 February 2026 04:47:17 +0000 (0:00:02.558) 0:00:44.608 **** 2026-02-18 04:47:18.866701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 04:47:18.866735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:47:18.866747 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:47:18.866765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 04:47:18.866777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:47:18.866788 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:47:18.866800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 04:47:18.866827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:47:22.504917 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:47:22.505043 | orchestrator | 2026-02-18 
04:47:22.505067 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-18 04:47:22.505084 | orchestrator | Wednesday 18 February 2026 04:47:18 +0000 (0:00:00.939) 0:00:45.548 **** 2026-02-18 04:47:22.505103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 04:47:22.505139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:47:22.505151 | 
orchestrator | skipping: [testbed-node-0] 2026-02-18 04:47:22.505161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 04:47:22.505190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:47:22.505200 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:47:22.505227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 04:47:22.505238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:47:22.505247 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:47:22.505256 | orchestrator | 2026-02-18 04:47:22.505265 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-18 04:47:22.505274 | orchestrator | Wednesday 18 February 2026 04:47:19 +0000 (0:00:00.913) 0:00:46.462 **** 2026-02-18 04:47:22.505288 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:22.505299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:22.505322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:28.528138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:28.528252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:28.528285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:28.528299 | orchestrator | 2026-02-18 04:47:28.528313 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-18 04:47:28.528347 | orchestrator | Wednesday 18 February 2026 04:47:22 +0000 (0:00:02.724) 0:00:49.187 **** 2026-02-18 04:47:28.528359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:28.528389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:28.528402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:28.528419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:28.528431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:28.528450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:28.528462 | orchestrator | 2026-02-18 04:47:28.528473 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-18 04:47:28.528484 | orchestrator | Wednesday 18 February 2026 04:47:27 +0000 (0:00:05.358) 0:00:54.545 **** 2026-02-18 04:47:28.528571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 04:47:30.550822 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:47:30.550925 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:47:30.550960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 04:47:30.550993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:47:30.551005 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:47:30.551015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-18 04:47:30.551043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 04:47:30.551054 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:47:30.551064 | orchestrator | 2026-02-18 04:47:30.551074 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-18 04:47:30.551085 | orchestrator | Wednesday 18 February 2026 04:47:28 +0000 (0:00:00.669) 0:00:55.215 **** 2026-02-18 04:47:30.551096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:30.551112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:30.551128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-18 04:47:30.551138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:47:30.551157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-18 04:48:27.167908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-18 04:48:27.168093 | orchestrator | 2026-02-18 04:48:27.168141 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-18 04:48:27.168164 | orchestrator | Wednesday 18 February 2026 04:47:30 +0000 (0:00:02.016) 0:00:57.231 **** 2026-02-18 04:48:27.168181 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:48:27.168199 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:48:27.168216 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:48:27.168232 | orchestrator | 2026-02-18 04:48:27.168248 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-18 04:48:27.168264 | orchestrator | Wednesday 18 February 2026 04:47:31 +0000 (0:00:00.566) 0:00:57.798 **** 2026-02-18 04:48:27.168280 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:48:27.168297 | orchestrator | 2026-02-18 04:48:27.168315 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-18 04:48:27.168333 | orchestrator | Wednesday 18 February 2026 04:47:33 +0000 (0:00:02.238) 0:01:00.037 **** 2026-02-18 04:48:27.168352 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:48:27.168370 | orchestrator | 2026-02-18 04:48:27.168387 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-18 04:48:27.168461 | orchestrator | Wednesday 18 February 2026 04:47:35 +0000 (0:00:02.263) 0:01:02.300 **** 2026-02-18 04:48:27.168483 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:48:27.168500 | orchestrator | 2026-02-18 04:48:27.168519 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-18 04:48:27.168538 | orchestrator | Wednesday 18 February 2026 04:47:51 +0000 (0:00:15.437) 0:01:17.737 **** 2026-02-18 04:48:27.168556 | orchestrator | 2026-02-18 04:48:27.168576 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-18 04:48:27.168595 | orchestrator | Wednesday 18 February 2026 04:47:51 +0000 (0:00:00.070) 0:01:17.808 **** 2026-02-18 04:48:27.168613 | orchestrator | 2026-02-18 04:48:27.168632 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-18 04:48:27.168651 | orchestrator | Wednesday 18 February 2026 04:47:51 +0000 (0:00:00.072) 0:01:17.881 **** 2026-02-18 04:48:27.168670 | orchestrator | 2026-02-18 04:48:27.168689 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-18 04:48:27.168708 | orchestrator | Wednesday 18 February 2026 04:47:51 +0000 (0:00:00.072) 0:01:17.953 **** 2026-02-18 04:48:27.168727 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:48:27.168747 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:48:27.168764 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:48:27.168782 | orchestrator | 2026-02-18 04:48:27.168800 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-18 04:48:27.168817 | orchestrator | Wednesday 18 February 2026 04:48:10 +0000 (0:00:19.536) 0:01:37.490 **** 2026-02-18 04:48:27.168834 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:48:27.168852 | orchestrator | changed: [testbed-node-2] 2026-02-18 04:48:27.168868 | orchestrator | changed: [testbed-node-1] 2026-02-18 04:48:27.168885 | orchestrator | 2026-02-18 04:48:27.168902 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:48:27.168920 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 04:48:27.168940 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-18 04:48:27.168957 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-18 04:48:27.168972 | orchestrator | 2026-02-18 04:48:27.168989 | orchestrator | 2026-02-18 04:48:27.169005 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:48:27.169022 | orchestrator | Wednesday 18 February 2026 04:48:26 +0000 (0:00:16.004) 0:01:53.494 **** 2026-02-18 04:48:27.169040 | orchestrator | =============================================================================== 2026-02-18 04:48:27.169078 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.54s 2026-02-18 04:48:27.169096 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.00s 2026-02-18 04:48:27.169114 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.44s 2026-02-18 04:48:27.169133 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.22s 2026-02-18 04:48:27.169151 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.36s 2026-02-18 04:48:27.169169 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.83s 2026-02-18 04:48:27.169187 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.76s 2026-02-18 04:48:27.169234 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.72s 2026-02-18 04:48:27.169254 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.39s 2026-02-18 04:48:27.169271 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.39s 2026-02-18 04:48:27.169289 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.32s 2026-02-18 04:48:27.169306 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.22s 2026-02-18 04:48:27.169324 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.15s 2026-02-18 04:48:27.169341 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.72s 2026-02-18 04:48:27.169359 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.56s 2026-02-18 04:48:27.169378 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.35s 2026-02-18 04:48:27.169422 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.26s 2026-02-18 04:48:27.169456 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.24s 2026-02-18 04:48:27.169476 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.02s 2026-02-18 04:48:27.169495 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.61s 2026-02-18 04:48:27.947270 | orchestrator | ok: Runtime: 1:43:44.243078 2026-02-18 04:48:28.214721 | 2026-02-18 04:48:28.214903 | TASK [Deploy in a nutshell] 2026-02-18 04:48:28.754987 | orchestrator | skipping: Conditional result was False 2026-02-18 04:48:28.778108 | 2026-02-18 04:48:28.778279 | TASK [Bootstrap services] 2026-02-18 04:48:29.458715 | orchestrator | 2026-02-18 04:48:29.458828 | orchestrator | # BOOTSTRAP 2026-02-18 04:48:29.458836 | orchestrator | 2026-02-18 04:48:29.458842 | orchestrator | + set -e 2026-02-18 04:48:29.458847 | orchestrator | + echo 2026-02-18 04:48:29.458852 | orchestrator | + echo '# BOOTSTRAP' 2026-02-18 04:48:29.458860 | orchestrator | + echo 2026-02-18 04:48:29.458882 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-18 04:48:29.469069 | orchestrator | + set -e 2026-02-18 04:48:29.469117 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-18 04:48:31.538899 | orchestrator | 2026-02-18 04:48:31 | INFO  | It takes a 
moment until task 6c03e16f-0225-47e8-9896-48874aec9032 (flavor-manager) has been started and output is visible here. 2026-02-18 04:48:39.196450 | orchestrator | 2026-02-18 04:48:34 | INFO  | Flavor SCS-1L-1 created 2026-02-18 04:48:39.196586 | orchestrator | 2026-02-18 04:48:34 | INFO  | Flavor SCS-1L-1-5 created 2026-02-18 04:48:39.196606 | orchestrator | 2026-02-18 04:48:35 | INFO  | Flavor SCS-1V-2 created 2026-02-18 04:48:39.196618 | orchestrator | 2026-02-18 04:48:35 | INFO  | Flavor SCS-1V-2-5 created 2026-02-18 04:48:39.196633 | orchestrator | 2026-02-18 04:48:35 | INFO  | Flavor SCS-1V-4 created 2026-02-18 04:48:39.196652 | orchestrator | 2026-02-18 04:48:35 | INFO  | Flavor SCS-1V-4-10 created 2026-02-18 04:48:39.196671 | orchestrator | 2026-02-18 04:48:35 | INFO  | Flavor SCS-1V-8 created 2026-02-18 04:48:39.196692 | orchestrator | 2026-02-18 04:48:35 | INFO  | Flavor SCS-1V-8-20 created 2026-02-18 04:48:39.196723 | orchestrator | 2026-02-18 04:48:35 | INFO  | Flavor SCS-2V-4 created 2026-02-18 04:48:39.196735 | orchestrator | 2026-02-18 04:48:36 | INFO  | Flavor SCS-2V-4-10 created 2026-02-18 04:48:39.196747 | orchestrator | 2026-02-18 04:48:36 | INFO  | Flavor SCS-2V-8 created 2026-02-18 04:48:39.196759 | orchestrator | 2026-02-18 04:48:36 | INFO  | Flavor SCS-2V-8-20 created 2026-02-18 04:48:39.196770 | orchestrator | 2026-02-18 04:48:36 | INFO  | Flavor SCS-2V-16 created 2026-02-18 04:48:39.196781 | orchestrator | 2026-02-18 04:48:36 | INFO  | Flavor SCS-2V-16-50 created 2026-02-18 04:48:39.196792 | orchestrator | 2026-02-18 04:48:36 | INFO  | Flavor SCS-4V-8 created 2026-02-18 04:48:39.196803 | orchestrator | 2026-02-18 04:48:36 | INFO  | Flavor SCS-4V-8-20 created 2026-02-18 04:48:39.196814 | orchestrator | 2026-02-18 04:48:37 | INFO  | Flavor SCS-4V-16 created 2026-02-18 04:48:39.196825 | orchestrator | 2026-02-18 04:48:37 | INFO  | Flavor SCS-4V-16-50 created 2026-02-18 04:48:39.196836 | orchestrator | 2026-02-18 04:48:37 | INFO  | Flavor 
SCS-4V-32 created 2026-02-18 04:48:39.196848 | orchestrator | 2026-02-18 04:48:37 | INFO  | Flavor SCS-4V-32-100 created 2026-02-18 04:48:39.196859 | orchestrator | 2026-02-18 04:48:37 | INFO  | Flavor SCS-8V-16 created 2026-02-18 04:48:39.196870 | orchestrator | 2026-02-18 04:48:37 | INFO  | Flavor SCS-8V-16-50 created 2026-02-18 04:48:39.196882 | orchestrator | 2026-02-18 04:48:37 | INFO  | Flavor SCS-8V-32 created 2026-02-18 04:48:39.196893 | orchestrator | 2026-02-18 04:48:38 | INFO  | Flavor SCS-8V-32-100 created 2026-02-18 04:48:39.196904 | orchestrator | 2026-02-18 04:48:38 | INFO  | Flavor SCS-16V-32 created 2026-02-18 04:48:39.196916 | orchestrator | 2026-02-18 04:48:38 | INFO  | Flavor SCS-16V-32-100 created 2026-02-18 04:48:39.196927 | orchestrator | 2026-02-18 04:48:38 | INFO  | Flavor SCS-2V-4-20s created 2026-02-18 04:48:39.196938 | orchestrator | 2026-02-18 04:48:38 | INFO  | Flavor SCS-4V-8-50s created 2026-02-18 04:48:39.196949 | orchestrator | 2026-02-18 04:48:38 | INFO  | Flavor SCS-8V-32-100s created 2026-02-18 04:48:41.530505 | orchestrator | 2026-02-18 04:48:41 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-18 04:48:51.676516 | orchestrator | 2026-02-18 04:48:51 | INFO  | Task ea0c027b-946c-4816-b1b8-1f9de6e575e2 (bootstrap-basic) was prepared for execution. 2026-02-18 04:48:51.676645 | orchestrator | 2026-02-18 04:48:51 | INFO  | It takes a moment until task ea0c027b-946c-4816-b1b8-1f9de6e575e2 (bootstrap-basic) has been started and output is visible here. 
2026-02-18 04:49:35.852853 | orchestrator | 2026-02-18 04:49:35.852974 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-18 04:49:35.852992 | orchestrator | 2026-02-18 04:49:35.853005 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-18 04:49:35.853017 | orchestrator | Wednesday 18 February 2026 04:48:56 +0000 (0:00:00.075) 0:00:00.075 **** 2026-02-18 04:49:35.853029 | orchestrator | ok: [localhost] 2026-02-18 04:49:35.853042 | orchestrator | 2026-02-18 04:49:35.853053 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-18 04:49:35.853064 | orchestrator | Wednesday 18 February 2026 04:48:58 +0000 (0:00:01.930) 0:00:02.005 **** 2026-02-18 04:49:35.853075 | orchestrator | ok: [localhost] 2026-02-18 04:49:35.853086 | orchestrator | 2026-02-18 04:49:35.853098 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-18 04:49:35.853109 | orchestrator | Wednesday 18 February 2026 04:49:05 +0000 (0:00:07.325) 0:00:09.331 **** 2026-02-18 04:49:35.853120 | orchestrator | changed: [localhost] 2026-02-18 04:49:35.853132 | orchestrator | 2026-02-18 04:49:35.853144 | orchestrator | TASK [Create public network] *************************************************** 2026-02-18 04:49:35.853156 | orchestrator | Wednesday 18 February 2026 04:49:11 +0000 (0:00:06.521) 0:00:15.852 **** 2026-02-18 04:49:35.853168 | orchestrator | changed: [localhost] 2026-02-18 04:49:35.853179 | orchestrator | 2026-02-18 04:49:35.853190 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-18 04:49:35.853201 | orchestrator | Wednesday 18 February 2026 04:49:17 +0000 (0:00:05.234) 0:00:21.087 **** 2026-02-18 04:49:35.853217 | orchestrator | changed: [localhost] 2026-02-18 04:49:35.853229 | orchestrator | 2026-02-18 04:49:35.853240 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-18 04:49:35.853251 | orchestrator | Wednesday 18 February 2026 04:49:23 +0000 (0:00:06.670) 0:00:27.758 **** 2026-02-18 04:49:35.853262 | orchestrator | changed: [localhost] 2026-02-18 04:49:35.853273 | orchestrator | 2026-02-18 04:49:35.853284 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-18 04:49:35.853361 | orchestrator | Wednesday 18 February 2026 04:49:28 +0000 (0:00:04.341) 0:00:32.099 **** 2026-02-18 04:49:35.853378 | orchestrator | changed: [localhost] 2026-02-18 04:49:35.853396 | orchestrator | 2026-02-18 04:49:35.853413 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-18 04:49:35.853437 | orchestrator | Wednesday 18 February 2026 04:49:32 +0000 (0:00:03.873) 0:00:35.972 **** 2026-02-18 04:49:35.853451 | orchestrator | ok: [localhost] 2026-02-18 04:49:35.853464 | orchestrator | 2026-02-18 04:49:35.853477 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:49:35.853490 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 04:49:35.853503 | orchestrator | 2026-02-18 04:49:35.853516 | orchestrator | 2026-02-18 04:49:35.853528 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:49:35.853541 | orchestrator | Wednesday 18 February 2026 04:49:35 +0000 (0:00:03.430) 0:00:39.402 **** 2026-02-18 04:49:35.853554 | orchestrator | =============================================================================== 2026-02-18 04:49:35.853566 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.33s 2026-02-18 04:49:35.853579 | orchestrator | Set public network to default ------------------------------------------- 6.67s 2026-02-18 04:49:35.853591 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 6.52s 2026-02-18 04:49:35.853604 | orchestrator | Create public network --------------------------------------------------- 5.23s 2026-02-18 04:49:35.853640 | orchestrator | Create public subnet ---------------------------------------------------- 4.34s 2026-02-18 04:49:35.853653 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.87s 2026-02-18 04:49:35.853667 | orchestrator | Create manager role ----------------------------------------------------- 3.43s 2026-02-18 04:49:35.853679 | orchestrator | Gathering Facts --------------------------------------------------------- 1.93s 2026-02-18 04:49:38.541017 | orchestrator | 2026-02-18 04:49:38 | INFO  | It takes a moment until task 3c951f25-5ee7-4530-9401-5ffc3c3e541d (image-manager) has been started and output is visible here. 2026-02-18 04:50:21.341605 | orchestrator | 2026-02-18 04:49:41 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-18 04:50:21.341724 | orchestrator | 2026-02-18 04:49:41 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-18 04:50:21.341742 | orchestrator | 2026-02-18 04:49:41 | INFO  | Importing image Cirros 0.6.2 2026-02-18 04:50:21.341755 | orchestrator | 2026-02-18 04:49:41 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-18 04:50:21.341768 | orchestrator | 2026-02-18 04:49:43 | INFO  | Waiting for image to leave queued state... 2026-02-18 04:50:21.341780 | orchestrator | 2026-02-18 04:49:45 | INFO  | Waiting for import to complete... 
2026-02-18 04:50:21.341792 | orchestrator | 2026-02-18 04:49:55 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-18 04:50:21.341804 | orchestrator | 2026-02-18 04:49:56 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-18 04:50:21.341815 | orchestrator | 2026-02-18 04:49:56 | INFO  | Setting internal_version = 0.6.2 2026-02-18 04:50:21.341827 | orchestrator | 2026-02-18 04:49:56 | INFO  | Setting image_original_user = cirros 2026-02-18 04:50:21.341838 | orchestrator | 2026-02-18 04:49:56 | INFO  | Adding tag os:cirros 2026-02-18 04:50:21.341849 | orchestrator | 2026-02-18 04:49:56 | INFO  | Setting property architecture: x86_64 2026-02-18 04:50:21.341860 | orchestrator | 2026-02-18 04:49:56 | INFO  | Setting property hw_disk_bus: scsi 2026-02-18 04:50:21.341871 | orchestrator | 2026-02-18 04:49:57 | INFO  | Setting property hw_rng_model: virtio 2026-02-18 04:50:21.341882 | orchestrator | 2026-02-18 04:49:57 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-18 04:50:21.341894 | orchestrator | 2026-02-18 04:49:57 | INFO  | Setting property hw_watchdog_action: reset 2026-02-18 04:50:21.341905 | orchestrator | 2026-02-18 04:49:57 | INFO  | Setting property hypervisor_type: qemu 2026-02-18 04:50:21.341916 | orchestrator | 2026-02-18 04:49:58 | INFO  | Setting property os_distro: cirros 2026-02-18 04:50:21.341927 | orchestrator | 2026-02-18 04:49:58 | INFO  | Setting property os_purpose: minimal 2026-02-18 04:50:21.341938 | orchestrator | 2026-02-18 04:49:58 | INFO  | Setting property replace_frequency: never 2026-02-18 04:50:21.341950 | orchestrator | 2026-02-18 04:49:58 | INFO  | Setting property uuid_validity: none 2026-02-18 04:50:21.341960 | orchestrator | 2026-02-18 04:49:59 | INFO  | Setting property provided_until: none 2026-02-18 04:50:21.341971 | orchestrator | 2026-02-18 04:49:59 | INFO  | Setting property image_description: Cirros 2026-02-18 04:50:21.341983 | orchestrator | 2026-02-18 04:49:59 | INFO  | 
Setting property image_name: Cirros 2026-02-18 04:50:21.341994 | orchestrator | 2026-02-18 04:50:00 | INFO  | Setting property internal_version: 0.6.2 2026-02-18 04:50:21.342005 | orchestrator | 2026-02-18 04:50:00 | INFO  | Setting property image_original_user: cirros 2026-02-18 04:50:21.342106 | orchestrator | 2026-02-18 04:50:00 | INFO  | Setting property os_version: 0.6.2 2026-02-18 04:50:21.342130 | orchestrator | 2026-02-18 04:50:00 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-18 04:50:21.342145 | orchestrator | 2026-02-18 04:50:01 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-18 04:50:21.342158 | orchestrator | 2026-02-18 04:50:01 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-18 04:50:21.342170 | orchestrator | 2026-02-18 04:50:01 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-18 04:50:21.342183 | orchestrator | 2026-02-18 04:50:01 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-18 04:50:21.342195 | orchestrator | 2026-02-18 04:50:01 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-18 04:50:21.342213 | orchestrator | 2026-02-18 04:50:01 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-18 04:50:21.342225 | orchestrator | 2026-02-18 04:50:01 | INFO  | Importing image Cirros 0.6.3 2026-02-18 04:50:21.342270 | orchestrator | 2026-02-18 04:50:01 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-18 04:50:21.342284 | orchestrator | 2026-02-18 04:50:03 | INFO  | Waiting for image to leave queued state... 2026-02-18 04:50:21.342296 | orchestrator | 2026-02-18 04:50:05 | INFO  | Waiting for import to complete... 
2026-02-18 04:50:21.342328 | orchestrator | 2026-02-18 04:50:15 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-02-18 04:50:21.342341 | orchestrator | 2026-02-18 04:50:16 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-02-18 04:50:21.342353 | orchestrator | 2026-02-18 04:50:16 | INFO  | Setting internal_version = 0.6.3 2026-02-18 04:50:21.342365 | orchestrator | 2026-02-18 04:50:16 | INFO  | Setting image_original_user = cirros 2026-02-18 04:50:21.342377 | orchestrator | 2026-02-18 04:50:16 | INFO  | Adding tag os:cirros 2026-02-18 04:50:21.342389 | orchestrator | 2026-02-18 04:50:16 | INFO  | Setting property architecture: x86_64 2026-02-18 04:50:21.342402 | orchestrator | 2026-02-18 04:50:16 | INFO  | Setting property hw_disk_bus: scsi 2026-02-18 04:50:21.342414 | orchestrator | 2026-02-18 04:50:16 | INFO  | Setting property hw_rng_model: virtio 2026-02-18 04:50:21.342426 | orchestrator | 2026-02-18 04:50:16 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-18 04:50:21.342439 | orchestrator | 2026-02-18 04:50:17 | INFO  | Setting property hw_watchdog_action: reset 2026-02-18 04:50:21.342451 | orchestrator | 2026-02-18 04:50:17 | INFO  | Setting property hypervisor_type: qemu 2026-02-18 04:50:21.342463 | orchestrator | 2026-02-18 04:50:17 | INFO  | Setting property os_distro: cirros 2026-02-18 04:50:21.342476 | orchestrator | 2026-02-18 04:50:17 | INFO  | Setting property os_purpose: minimal 2026-02-18 04:50:21.342488 | orchestrator | 2026-02-18 04:50:18 | INFO  | Setting property replace_frequency: never 2026-02-18 04:50:21.342501 | orchestrator | 2026-02-18 04:50:18 | INFO  | Setting property uuid_validity: none 2026-02-18 04:50:21.342512 | orchestrator | 2026-02-18 04:50:18 | INFO  | Setting property provided_until: none 2026-02-18 04:50:21.342523 | orchestrator | 2026-02-18 04:50:18 | INFO  | Setting property image_description: Cirros 2026-02-18 04:50:21.342534 | orchestrator | 2026-02-18 04:50:19 | INFO  | 
Setting property image_name: Cirros 2026-02-18 04:50:21.342545 | orchestrator | 2026-02-18 04:50:19 | INFO  | Setting property internal_version: 0.6.3 2026-02-18 04:50:21.342564 | orchestrator | 2026-02-18 04:50:19 | INFO  | Setting property image_original_user: cirros 2026-02-18 04:50:21.342576 | orchestrator | 2026-02-18 04:50:19 | INFO  | Setting property os_version: 0.6.3 2026-02-18 04:50:21.342587 | orchestrator | 2026-02-18 04:50:19 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-18 04:50:21.342598 | orchestrator | 2026-02-18 04:50:20 | INFO  | Setting property image_build_date: 2024-09-26 2026-02-18 04:50:21.342609 | orchestrator | 2026-02-18 04:50:20 | INFO  | Checking status of 'Cirros 0.6.3' 2026-02-18 04:50:21.342620 | orchestrator | 2026-02-18 04:50:20 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-02-18 04:50:21.342631 | orchestrator | 2026-02-18 04:50:20 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-02-18 04:50:21.655643 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-02-18 04:50:24.024609 | orchestrator | 2026-02-18 04:50:24 | INFO  | date: 2026-02-18 2026-02-18 04:50:24.024782 | orchestrator | 2026-02-18 04:50:24 | INFO  | image: octavia-amphora-haproxy-2024.2.20260218.qcow2 2026-02-18 04:50:24.024881 | orchestrator | 2026-02-18 04:50:24 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260218.qcow2 2026-02-18 04:50:24.025714 | orchestrator | 2026-02-18 04:50:24 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260218.qcow2.CHECKSUM 2026-02-18 04:50:24.418307 | orchestrator | 2026-02-18 04:50:24 | INFO  | checksum: 836ff4d11baba3b47c7ee1705878a29a9427fc2288a11429e1da9d118dae5e05 2026-02-18 04:50:24.500624 | orchestrator | 
2026-02-18 04:50:24 | INFO  | It takes a moment until task 382ca34a-1102-436d-b13e-8fcbafb66708 (image-manager) has been started and output is visible here. 2026-02-18 04:51:56.869155 | orchestrator | 2026-02-18 04:50:26 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-18' 2026-02-18 04:51:56.869293 | orchestrator | 2026-02-18 04:50:26 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260218.qcow2: 200 2026-02-18 04:51:56.869315 | orchestrator | 2026-02-18 04:50:26 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-18 2026-02-18 04:51:56.869327 | orchestrator | 2026-02-18 04:50:26 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260218.qcow2 2026-02-18 04:51:56.869340 | orchestrator | 2026-02-18 04:50:28 | INFO  | Waiting for image to leave queued state... 2026-02-18 04:51:56.869352 | orchestrator | 2026-02-18 04:50:30 | INFO  | Waiting for import to complete... 2026-02-18 04:51:56.869364 | orchestrator | 2026-02-18 04:50:40 | INFO  | Waiting for import to complete... 2026-02-18 04:51:56.869375 | orchestrator | 2026-02-18 04:50:50 | INFO  | Waiting for import to complete... 2026-02-18 04:51:56.869386 | orchestrator | 2026-02-18 04:51:00 | INFO  | Waiting for import to complete... 2026-02-18 04:51:56.869399 | orchestrator | 2026-02-18 04:51:10 | INFO  | Waiting for import to complete... 2026-02-18 04:51:56.869411 | orchestrator | 2026-02-18 04:51:21 | INFO  | Waiting for import to complete... 2026-02-18 04:51:56.869422 | orchestrator | 2026-02-18 04:51:31 | INFO  | Waiting for import to complete... 2026-02-18 04:51:56.869433 | orchestrator | 2026-02-18 04:51:41 | INFO  | Waiting for import to complete... 
2026-02-18 04:51:56.869444 | orchestrator | 2026-02-18 04:51:51 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-18' successfully completed, reloading images 2026-02-18 04:51:56.869479 | orchestrator | 2026-02-18 04:51:51 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-18' 2026-02-18 04:51:56.869491 | orchestrator | 2026-02-18 04:51:51 | INFO  | Setting internal_version = 2026-02-18 2026-02-18 04:51:56.869502 | orchestrator | 2026-02-18 04:51:51 | INFO  | Setting image_original_user = ubuntu 2026-02-18 04:51:56.869514 | orchestrator | 2026-02-18 04:51:51 | INFO  | Adding tag amphora 2026-02-18 04:51:56.869525 | orchestrator | 2026-02-18 04:51:52 | INFO  | Adding tag os:ubuntu 2026-02-18 04:51:56.869536 | orchestrator | 2026-02-18 04:51:52 | INFO  | Setting property architecture: x86_64 2026-02-18 04:51:56.869547 | orchestrator | 2026-02-18 04:51:52 | INFO  | Setting property hw_disk_bus: scsi 2026-02-18 04:51:56.869557 | orchestrator | 2026-02-18 04:51:52 | INFO  | Setting property hw_rng_model: virtio 2026-02-18 04:51:56.869568 | orchestrator | 2026-02-18 04:51:53 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-18 04:51:56.869580 | orchestrator | 2026-02-18 04:51:53 | INFO  | Setting property hw_watchdog_action: reset 2026-02-18 04:51:56.869591 | orchestrator | 2026-02-18 04:51:53 | INFO  | Setting property hypervisor_type: qemu 2026-02-18 04:51:56.869601 | orchestrator | 2026-02-18 04:51:53 | INFO  | Setting property os_distro: ubuntu 2026-02-18 04:51:56.869612 | orchestrator | 2026-02-18 04:51:53 | INFO  | Setting property replace_frequency: quarterly 2026-02-18 04:51:56.869623 | orchestrator | 2026-02-18 04:51:54 | INFO  | Setting property uuid_validity: last-1 2026-02-18 04:51:56.869634 | orchestrator | 2026-02-18 04:51:54 | INFO  | Setting property provided_until: none 2026-02-18 04:51:56.869661 | orchestrator | 2026-02-18 04:51:54 | INFO  | Setting property os_purpose: network 2026-02-18 04:51:56.869674 | orchestrator 
| 2026-02-18 04:51:54 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-02-18 04:51:56.869687 | orchestrator | 2026-02-18 04:51:55 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-02-18 04:51:56.869699 | orchestrator | 2026-02-18 04:51:55 | INFO  | Setting property internal_version: 2026-02-18 2026-02-18 04:51:56.869712 | orchestrator | 2026-02-18 04:51:55 | INFO  | Setting property image_original_user: ubuntu 2026-02-18 04:51:56.869724 | orchestrator | 2026-02-18 04:51:55 | INFO  | Setting property os_version: 2026-02-18 2026-02-18 04:51:56.869737 | orchestrator | 2026-02-18 04:51:56 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260218.qcow2 2026-02-18 04:51:56.869749 | orchestrator | 2026-02-18 04:51:56 | INFO  | Setting property image_build_date: 2026-02-18 2026-02-18 04:51:56.869779 | orchestrator | 2026-02-18 04:51:56 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-18' 2026-02-18 04:51:56.869792 | orchestrator | 2026-02-18 04:51:56 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-18' 2026-02-18 04:51:56.869804 | orchestrator | 2026-02-18 04:51:56 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-02-18 04:51:56.869816 | orchestrator | 2026-02-18 04:51:56 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-02-18 04:51:56.869835 | orchestrator | 2026-02-18 04:51:56 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-02-18 04:51:56.869855 | orchestrator | 2026-02-18 04:51:56 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-02-18 04:51:57.478002 | orchestrator | ok: Runtime: 0:03:28.115216 2026-02-18 04:51:57.498363 | 2026-02-18 04:51:57.498524 | TASK [Run checks] 2026-02-18 04:51:58.235870 | orchestrator | + set -e 2026-02-18 04:51:58.236056 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-02-18 04:51:58.236078 | orchestrator | ++ export INTERACTIVE=false 2026-02-18 04:51:58.236099 | orchestrator | ++ INTERACTIVE=false 2026-02-18 04:51:58.236112 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-18 04:51:58.236166 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-18 04:51:58.236182 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-18 04:51:58.236783 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-18 04:51:58.242663 | orchestrator | 2026-02-18 04:51:58.242715 | orchestrator | # CHECK 2026-02-18 04:51:58.242727 | orchestrator | 2026-02-18 04:51:58.242739 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-18 04:51:58.242755 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-18 04:51:58.242767 | orchestrator | + echo 2026-02-18 04:51:58.242778 | orchestrator | + echo '# CHECK' 2026-02-18 04:51:58.242788 | orchestrator | + echo 2026-02-18 04:51:58.242803 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-18 04:51:58.243380 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-18 04:51:58.310676 | orchestrator | 2026-02-18 04:51:58.310772 | orchestrator | ## Containers @ testbed-manager 2026-02-18 04:51:58.310787 | orchestrator | 2026-02-18 04:51:58.310802 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-18 04:51:58.310814 | orchestrator | + echo 2026-02-18 04:51:58.310825 | orchestrator | + echo '## Containers @ testbed-manager' 2026-02-18 04:51:58.310837 | orchestrator | + echo 2026-02-18 04:51:58.310849 | orchestrator | + osism container testbed-manager ps 2026-02-18 04:52:00.319639 | orchestrator | 2026-02-18 04:52:00 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-02-18 04:52:00.701486 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-18 04:52:00.701587 | orchestrator | 5cb5543b40ec 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-02-18 04:52:00.701605 | orchestrator | 180a7e5872c8 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager 2026-02-18 04:52:00.701614 | orchestrator | 95ed1d4e08e2 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-18 04:52:00.701623 | orchestrator | c8c3fe28f585 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-18 04:52:00.701631 | orchestrator | 6e5841f2b367 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-02-18 04:52:00.701643 | orchestrator | 6e70aecba4e3 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 58 minutes ago Up 58 minutes cephclient 2026-02-18 04:52:00.701651 | orchestrator | 581ff0bbc8bc registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-18 04:52:00.701658 | orchestrator | 89dd0f438342 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-18 04:52:00.701688 | orchestrator | e9e1ab149eda registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-18 04:52:00.701696 | orchestrator | 7976e9e4408d registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-02-18 04:52:00.701704 | orchestrator | cd6a14f25c9f phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-02-18 04:52:00.701711 | 
orchestrator | bddf1235e396 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-02-18 04:52:00.701719 | orchestrator | a75e07616ec7 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-02-18 04:52:00.701727 | orchestrator | 36ae20694467 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-02-18 04:52:00.701749 | orchestrator | 1975bff0aeab registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-02-18 04:52:00.701765 | orchestrator | 5bf4fb587dde registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-02-18 04:52:00.701773 | orchestrator | fa035746a572 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-02-18 04:52:00.701780 | orchestrator | 92f64e3a70c6 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-02-18 04:52:00.701788 | orchestrator | fa9c7e8c1408 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-02-18 04:52:00.701795 | orchestrator | d33070d2d1c9 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-02-18 04:52:00.701803 | orchestrator | 9d591fda445b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-02-18 04:52:00.701811 | orchestrator | 2f26377bf046 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-02-18 04:52:00.701824 | orchestrator | 
951796ee2c98 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-02-18 04:52:00.701832 | orchestrator | aa00b365d26d registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-02-18 04:52:00.701840 | orchestrator | 19795bb48f4d registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-02-18 04:52:00.701847 | orchestrator | ca24b62949b6 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-02-18 04:52:00.701855 | orchestrator | f864f7ccc279 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-02-18 04:52:00.701862 | orchestrator | 4af8f4b55630 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-02-18 04:52:00.701870 | orchestrator | acecc5932361 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-02-18 04:52:00.701881 | orchestrator | 671decadd119 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-02-18 04:52:01.010146 | orchestrator | 2026-02-18 04:52:01.010250 | orchestrator | ## Images @ testbed-manager 2026-02-18 04:52:01.010263 | orchestrator | 2026-02-18 04:52:01.010272 | orchestrator | + echo 2026-02-18 04:52:01.010281 | orchestrator | + echo '## Images @ testbed-manager' 2026-02-18 04:52:01.010289 | orchestrator | + echo 2026-02-18 04:52:01.010301 | orchestrator | + osism container testbed-manager images 2026-02-18 04:52:03.402314 | orchestrator | 
REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-18 04:52:03.402427 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 3e5b5f52e5b8 25 hours ago 239MB 2026-02-18 04:52:03.402443 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 3 weeks ago 41.4MB 2026-02-18 04:52:03.402455 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB 2026-02-18 04:52:03.402466 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB 2026-02-18 04:52:03.402477 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-18 04:52:03.402488 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-18 04:52:03.402499 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-18 04:52:03.402513 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB 2026-02-18 04:52:03.402524 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-18 04:52:03.402560 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB 2026-02-18 04:52:03.402571 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB 2026-02-18 04:52:03.402582 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-18 04:52:03.402593 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB 2026-02-18 04:52:03.402604 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB 2026-02-18 04:52:03.402615 | orchestrator | 
registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB 2026-02-18 04:52:03.402626 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB 2026-02-18 04:52:03.402636 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB 2026-02-18 04:52:03.402647 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB 2026-02-18 04:52:03.402658 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 3 months ago 334MB 2026-02-18 04:52:03.402669 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 months ago 742MB 2026-02-18 04:52:03.402680 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB 2026-02-18 04:52:03.402691 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB 2026-02-18 04:52:03.402701 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB 2026-02-18 04:52:03.402712 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB 2026-02-18 04:52:03.402723 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-02-18 04:52:03.721926 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-18 04:52:03.722203 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-18 04:52:03.779110 | orchestrator | 2026-02-18 04:52:03.779240 | orchestrator | ## Containers @ testbed-node-0 2026-02-18 04:52:03.779256 | orchestrator | 2026-02-18 04:52:03.779267 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-18 04:52:03.779279 | orchestrator | + echo 2026-02-18 04:52:03.779291 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-02-18 04:52:03.779302 | orchestrator | + echo 2026-02-18 04:52:03.779313 | orchestrator | + osism container testbed-node-0 ps 2026-02-18 
04:52:06.238312 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-18 04:52:06.238431 | orchestrator | 88e3275305b1 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-18 04:52:06.238472 | orchestrator | a0b081c7ab19 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-18 04:52:06.238486 | orchestrator | 0fe4c919fd29 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-02-18 04:52:06.238497 | orchestrator | a5c1303b415a registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-18 04:52:06.238531 | orchestrator | 41f333272335 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-18 04:52:06.238544 | orchestrator | 461b47cecc18 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-18 04:52:06.238562 | orchestrator | dcd1fb33a739 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-18 04:52:06.238589 | orchestrator | 4e27b8c01284 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-18 04:52:06.238601 | orchestrator | 2e61a6298c1e registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-18 04:52:06.238613 | orchestrator | 9f3cd4c86bb6 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-18 04:52:06.238624 | orchestrator | 1d86886d66a4 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-18 04:52:06.238635 | orchestrator | 2d5b9010b82b registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-18 04:52:06.238646 | orchestrator | c597f746f16a registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-18 04:52:06.238657 | orchestrator | 61bc4e738aba registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-18 04:52:06.238668 | orchestrator | 9b589d9fcc57 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-18 04:52:06.238678 | orchestrator | bdead009f4c5 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-18 04:52:06.238689 | orchestrator | 3e51c4fa15f2 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-18 04:52:06.238700 | orchestrator | a14c323f2fdd registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-18 04:52:06.238711 | orchestrator | c82ce4b903af registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-18 04:52:06.238727 | orchestrator | 6d3f5c02640c 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-18 04:52:06.238799 | orchestrator | 44f9f4a128bb registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-18 04:52:06.238843 | orchestrator | ec32f313b9d6 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 21 minutes octavia_driver_agent 2026-02-18 04:52:06.238864 | orchestrator | f75be516e607 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-18 04:52:06.238876 | orchestrator | 85c5b4797fa6 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-18 04:52:06.238887 | orchestrator | f11c8e377f03 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-18 04:52:06.238905 | orchestrator | c34b302e1be1 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-18 04:52:06.238924 | orchestrator | cbd2faa11983 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-02-18 04:52:06.238944 | orchestrator | 529b5420997f registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-18 04:52:06.238975 | orchestrator | 6beb5c7b5816 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 
2026-02-18 04:52:06.238990 | orchestrator | 079e5f73dd7a registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-18 04:52:06.239002 | orchestrator | fff67d117cf9 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-18 04:52:06.239013 | orchestrator | d155ee5e3217 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-18 04:52:06.239024 | orchestrator | 60e34678c661 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-02-18 04:52:06.239035 | orchestrator | 470ffc7683fa registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-18 04:52:06.239046 | orchestrator | 3f815a1be9ac registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-18 04:52:06.239057 | orchestrator | 873b8070974b registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-18 04:52:06.239068 | orchestrator | 9831eb3a5f24 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-02-18 04:52:06.239078 | orchestrator | cc44870858ae registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-18 04:52:06.239089 | orchestrator | a5b62a5f919e registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) 
skyline_apiserver 2026-02-18 04:52:06.239100 | orchestrator | 6794d8c7ebe8 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-18 04:52:06.239152 | orchestrator | 7413480f8894 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-18 04:52:06.239174 | orchestrator | 9e531a0b86cf registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-18 04:52:06.239202 | orchestrator | ef5fa9431418 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-18 04:52:06.239215 | orchestrator | c4c70ee4e532 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-18 04:52:06.239226 | orchestrator | 87953fbb2969 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-18 04:52:06.239237 | orchestrator | 6359ba8714e6 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-18 04:52:06.239248 | orchestrator | 2c9f404fe561 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-02-18 04:52:06.239259 | orchestrator | 8c35727b6439 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-02-18 04:52:06.239280 | orchestrator | 239c0a75405e registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-02-18 04:52:06.239292 | 
orchestrator | 075c7ea25354 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-0 2026-02-18 04:52:06.239303 | orchestrator | 9789d3c46cc1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-02-18 04:52:06.239314 | orchestrator | 90866ac7d579 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0 2026-02-18 04:52:06.239325 | orchestrator | 1d1f22ff564c registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-18 04:52:06.239336 | orchestrator | 3c5334d25336 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-18 04:52:06.239347 | orchestrator | 58ebc639e312 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-18 04:52:06.239358 | orchestrator | 96b74214781b registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-18 04:52:06.239374 | orchestrator | 490aa451f1dd registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-18 04:52:06.239386 | orchestrator | 08eb5f476729 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-18 04:52:06.239403 | orchestrator | 2b0af373640d registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-18 04:52:06.239414 | orchestrator | df29edd59b0a 
registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-18 04:52:06.239426 | orchestrator | 2cff4920d5a3 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-18 04:52:06.239437 | orchestrator | fadd35430f84 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-18 04:52:06.239448 | orchestrator | 10c77552ca46 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-18 04:52:06.239459 | orchestrator | 2e9f96d6a280 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-18 04:52:06.239470 | orchestrator | ee61548b842f registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-02-18 04:52:06.239481 | orchestrator | 3ffd4c409a74 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-02-18 04:52:06.239492 | orchestrator | d141fa4ea8b9 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-18 04:52:06.239503 | orchestrator | dd777600389e registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-02-18 04:52:06.239528 | orchestrator | ff33677e2e6d registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-18 04:52:06.239545 | orchestrator | bd276d218071 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours 
kolla_toolbox 2026-02-18 04:52:06.239556 | orchestrator | 67efd87664bb registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-18 04:52:06.549814 | orchestrator | 2026-02-18 04:52:06.549945 | orchestrator | ## Images @ testbed-node-0 2026-02-18 04:52:06.549964 | orchestrator | 2026-02-18 04:52:06.549977 | orchestrator | + echo 2026-02-18 04:52:06.549989 | orchestrator | + echo '## Images @ testbed-node-0' 2026-02-18 04:52:06.550001 | orchestrator | + echo 2026-02-18 04:52:06.550012 | orchestrator | + osism container testbed-node-0 images 2026-02-18 04:52:09.001087 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-18 04:52:09.001240 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-18 04:52:09.001257 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-18 04:52:09.001269 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-18 04:52:09.001280 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-18 04:52:09.001314 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-18 04:52:09.001326 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-18 04:52:09.001337 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-18 04:52:09.001348 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-18 04:52:09.001359 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-18 04:52:09.001370 | orchestrator | 
registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-18 04:52:09.001381 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-18 04:52:09.001392 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-18 04:52:09.001403 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-18 04:52:09.001414 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-18 04:52:09.001425 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-18 04:52:09.001436 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-18 04:52:09.001446 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-18 04:52:09.001457 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-18 04:52:09.001468 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-18 04:52:09.001479 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-18 04:52:09.001490 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-18 04:52:09.001501 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-18 04:52:09.001512 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-18 04:52:09.001523 | 
orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-18 04:52:09.001533 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-18 04:52:09.001544 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-18 04:52:09.001556 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-18 04:52:09.001571 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-18 04:52:09.001583 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-18 04:52:09.001594 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-18 04:52:09.001612 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-18 04:52:09.001644 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-18 04:52:09.001658 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-18 04:52:09.001671 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-18 04:52:09.001684 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-18 04:52:09.001695 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-18 04:52:09.001706 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-18 04:52:09.001716 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-18 04:52:09.001727 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-18 04:52:09.001738 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-18 04:52:09.001749 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-18 04:52:09.001760 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-18 04:52:09.001771 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-18 04:52:09.001782 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-18 04:52:09.001793 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-18 04:52:09.001804 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-18 04:52:09.001815 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-18 04:52:09.001826 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-18 04:52:09.001837 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-18 04:52:09.001848 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-18 04:52:09.001859 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-18 04:52:09.001870 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-18 04:52:09.001881 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-18 04:52:09.001892 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-18 04:52:09.001903 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-18 04:52:09.001913 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-18 04:52:09.001931 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-18 04:52:09.001942 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-18 04:52:09.001958 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-18 04:52:09.001969 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-18 04:52:09.001980 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-18 04:52:09.001990 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-18 04:52:09.002001 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-18 04:52:09.002076 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-18 04:52:09.002091 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-18 04:52:09.002102 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-18 04:52:09.002149 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-18 04:52:09.002161 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-18 04:52:09.002172 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-18 04:52:09.358373 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-18 04:52:09.358680 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-18 04:52:09.419835 | orchestrator | 2026-02-18 04:52:09.419929 | orchestrator | ## Containers @ testbed-node-1 2026-02-18 04:52:09.419947 | orchestrator | 2026-02-18 04:52:09.419959 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-18 04:52:09.419970 | orchestrator | + echo 2026-02-18 04:52:09.419982 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-18 04:52:09.419993 | orchestrator | + echo 2026-02-18 04:52:09.420005 | orchestrator | + osism container testbed-node-1 ps 2026-02-18 04:52:11.861355 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-18 04:52:11.861456 | orchestrator | f1ec0a84c31a registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-18 04:52:11.861472 | orchestrator | b849ea167d38 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-18 04:52:11.861484 | orchestrator | 891c6a7686d3 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-18 04:52:11.861496 | orchestrator | 8242eea2b7e2 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 
minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-18 04:52:11.861508 | orchestrator | c7473bf0c525 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-18 04:52:11.861520 | orchestrator | 597e8a045a24 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-18 04:52:11.861636 | orchestrator | 6209b002842d registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-18 04:52:11.861665 | orchestrator | 1bd73df39092 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-18 04:52:11.861685 | orchestrator | 4936ec225f6b registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-18 04:52:11.861702 | orchestrator | d0ac056fc827 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-18 04:52:11.861713 | orchestrator | b15a17bcfb5a registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-18 04:52:11.861724 | orchestrator | db3662fdebd0 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-18 04:52:11.861757 | orchestrator | 463865e25b43 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-18 04:52:11.861777 | orchestrator | 030ded23f0e7 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 
"dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-18 04:52:11.861795 | orchestrator | b4c91fa2c468 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-18 04:52:11.861807 | orchestrator | cd6e33cd371d registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-18 04:52:11.861818 | orchestrator | a05de8350d98 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-18 04:52:11.861829 | orchestrator | ba2cf6e3b701 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-18 04:52:11.861840 | orchestrator | 5cc9abd2e147 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-18 04:52:11.861870 | orchestrator | 98750771476d registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-18 04:52:11.861882 | orchestrator | c5cf2d578a7c registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-18 04:52:11.861893 | orchestrator | dfcecc61ce3d registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-18 04:52:11.861904 | orchestrator | 077b7b4a12a7 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-18 04:52:11.861916 | orchestrator | 093ee0029535 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-18 04:52:11.862714 | orchestrator | 229daf7cabff registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-18 04:52:11.862748 | orchestrator | de64811f80a5 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-18 04:52:11.862759 | orchestrator | 34638049be15 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-18 04:52:11.862770 | orchestrator | 2f92f04d5e85 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-18 04:52:11.862781 | orchestrator | 459b864eff87 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-18 04:52:11.862792 | orchestrator | 989285ff39d4 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-18 04:52:11.862803 | orchestrator | 49039a646b3e registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-18 04:52:11.862815 | orchestrator | a9b81ca9fb93 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-18 04:52:11.862826 | orchestrator | ef480368c0ca registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-18 
04:52:11.862836 | orchestrator | 0a65dbec3ba6 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-18 04:52:11.862847 | orchestrator | d63b575f85b6 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-18 04:52:11.862858 | orchestrator | 8da853970e93 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-18 04:52:11.862875 | orchestrator | 53eeb19ddc91 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-02-18 04:52:11.862887 | orchestrator | f6a436d6a01b registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-18 04:52:11.862897 | orchestrator | 9836b49e692b registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-18 04:52:11.862908 | orchestrator | e4da6b771d3e registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-02-18 04:52:11.862919 | orchestrator | 66f28edc137a registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-18 04:52:11.862940 | orchestrator | 96266a96cb22 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-18 04:52:11.862951 | orchestrator | 214f8166ea92 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-18 04:52:11.862962 | orchestrator | 
8773c93ac653 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-02-18 04:52:11.862982 | orchestrator | d25efef68b6b registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-18 04:52:11.862993 | orchestrator | bbe367ed6dc2 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-18 04:52:11.863003 | orchestrator | 6fbfe3071bdd registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-02-18 04:52:11.863014 | orchestrator | cf0f65081af5 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-02-18 04:52:11.863025 | orchestrator | 7983f17627fa registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-02-18 04:52:11.863036 | orchestrator | 436bebb44335 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-1 2026-02-18 04:52:11.863047 | orchestrator | ef0aca9cd3b2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-02-18 04:52:11.863058 | orchestrator | 4c84206aa4db registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-02-18 04:52:11.863069 | orchestrator | 27c500a72307 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-18 04:52:11.863080 | orchestrator | bf5a0fcdd28c registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-18 04:52:11.863090 | orchestrator | a1e198f3e3ca registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-18 04:52:11.863101 | orchestrator | f6f8756dbf28 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-18 04:52:11.863172 | orchestrator | 347926254fe8 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-18 04:52:11.863185 | orchestrator | e1580df2739d registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-18 04:52:11.863196 | orchestrator | 8a3d0cc17c32 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-18 04:52:11.863213 | orchestrator | 3e7ff52fa706 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-18 04:52:11.863223 | orchestrator | 080562aab46e registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-18 04:52:11.863234 | orchestrator | e439974c48e7 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-18 04:52:11.863245 | orchestrator | 80d6c690e8e7 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-18 04:52:11.863255 | orchestrator | f61697a86b14 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-18 04:52:11.863271 | orchestrator | 32036eaf5d9f registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-18 04:52:11.863289 | orchestrator | 9154315b20cd registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-02-18 04:52:11.863300 | orchestrator | 2374e2239f18 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-18 04:52:11.863313 | orchestrator | 43f88ba61bef registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-02-18 04:52:11.863326 | orchestrator | 2cb3cae4a601 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-18 04:52:11.863343 | orchestrator | 387a39a32c56 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-18 04:52:11.863357 | orchestrator | a8d57a4ba40e registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-18 04:52:12.174202 | orchestrator | 2026-02-18 04:52:12.174290 | orchestrator | ## Images @ testbed-node-1 2026-02-18 04:52:12.174301 | orchestrator | 2026-02-18 04:52:12.174311 | orchestrator | + echo 2026-02-18 04:52:12.174319 | orchestrator | + echo '## Images @ testbed-node-1' 2026-02-18 04:52:12.174328 | orchestrator | + echo 2026-02-18 04:52:12.174336 | orchestrator | + osism container testbed-node-1 images 2026-02-18 04:52:14.612537 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-18 04:52:14.612687 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-18 04:52:14.612707 | 
orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-18 04:52:14.612720 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-18 04:52:14.612733 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-18 04:52:14.612745 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-18 04:52:14.612756 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-18 04:52:14.612792 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-18 04:52:14.612804 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-18 04:52:14.612816 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-18 04:52:14.612827 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-18 04:52:14.612839 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-18 04:52:14.612850 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-18 04:52:14.612862 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-18 04:52:14.612873 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-18 04:52:14.612885 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-18 04:52:14.612896 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 
0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-18 04:52:14.612908 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-18 04:52:14.612919 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-18 04:52:14.612930 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-18 04:52:14.612942 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-18 04:52:14.612953 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-18 04:52:14.612965 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-18 04:52:14.612977 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-18 04:52:14.612988 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-18 04:52:14.613000 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-18 04:52:14.613011 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-18 04:52:14.613023 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-18 04:52:14.613034 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-18 04:52:14.613046 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-18 04:52:14.613057 | orchestrator | 
registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-18 04:52:14.613069 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-18 04:52:14.613101 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-18 04:52:14.613161 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-18 04:52:14.613174 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-18 04:52:14.613187 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-18 04:52:14.613199 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-18 04:52:14.613212 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-18 04:52:14.613241 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-18 04:52:14.613254 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-18 04:52:14.613266 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-18 04:52:14.613278 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-18 04:52:14.613290 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-18 04:52:14.613302 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-18 04:52:14.613314 | orchestrator | 
registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-18 04:52:14.613326 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-18 04:52:14.613339 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-18 04:52:14.613350 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-18 04:52:14.613360 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-18 04:52:14.613371 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-18 04:52:14.613381 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-18 04:52:14.613392 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-18 04:52:14.613403 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-18 04:52:14.613413 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-18 04:52:14.613424 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-18 04:52:14.613434 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-18 04:52:14.613445 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-18 04:52:14.613456 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-18 04:52:14.613467 | 
orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-18 04:52:14.613477 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-18 04:52:14.613495 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-18 04:52:14.613587 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-18 04:52:14.613602 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-18 04:52:14.613613 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-18 04:52:14.613624 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-18 04:52:14.613634 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-18 04:52:14.613645 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-18 04:52:14.613655 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-18 04:52:14.613666 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-18 04:52:14.613677 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-18 04:52:14.919807 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-18 04:52:14.920465 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-18 04:52:14.979724 | orchestrator | 2026-02-18 04:52:14.979825 | orchestrator | ## Containers @ testbed-node-2 2026-02-18 04:52:14.979840 | orchestrator | 
2026-02-18 04:52:14.979852 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-18 04:52:14.979863 | orchestrator | + echo 2026-02-18 04:52:14.979874 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-02-18 04:52:14.979886 | orchestrator | + echo 2026-02-18 04:52:14.979897 | orchestrator | + osism container testbed-node-2 ps 2026-02-18 04:52:17.361742 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-18 04:52:17.361846 | orchestrator | 71ab834fd919 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-18 04:52:17.361863 | orchestrator | 373a4995842d registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-18 04:52:17.361875 | orchestrator | 852ebed5f635 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 6 minutes grafana 2026-02-18 04:52:17.361886 | orchestrator | 9179f31e1763 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-18 04:52:17.361899 | orchestrator | 8f2bfa4dd7e4 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-18 04:52:17.361911 | orchestrator | 90d8fe3ca335 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-18 04:52:17.361923 | orchestrator | c8ea6efd758f registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-18 04:52:17.361936 | orchestrator | 9ccd6f15f5c5 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 
minutes ago Up 10 minutes prometheus_node_exporter 2026-02-18 04:52:17.361969 | orchestrator | 6e51727ff048 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-18 04:52:17.361981 | orchestrator | 7daf7a373d9b registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-02-18 04:52:17.361993 | orchestrator | 0f46f577e652 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-18 04:52:17.362004 | orchestrator | 45ae1ca9ba6f registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-18 04:52:17.362097 | orchestrator | 5326f3427494 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-18 04:52:17.362215 | orchestrator | e244050ccbf7 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-18 04:52:17.362227 | orchestrator | c7ad184cfef4 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-18 04:52:17.362238 | orchestrator | 40cd9526bc18 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-18 04:52:17.362249 | orchestrator | f05efe9b30ad registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-18 04:52:17.362260 | orchestrator | 34f8a4ddc181 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes 
(healthy) ceilometer_notification 2026-02-18 04:52:17.362285 | orchestrator | 23030ff63ad1 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-18 04:52:17.362318 | orchestrator | fe285bd554c9 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-18 04:52:17.362332 | orchestrator | 35074c2a576b registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-18 04:52:17.362344 | orchestrator | c9a8bdac87af registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-18 04:52:17.362356 | orchestrator | 63a308f240b8 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-18 04:52:17.362369 | orchestrator | 7eac2a6a4552 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-18 04:52:17.362380 | orchestrator | c7176b2d3a53 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-18 04:52:17.362402 | orchestrator | 1774685ae2ba registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-18 04:52:17.362415 | orchestrator | 0fa3522fcf78 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-18 04:52:17.362427 | orchestrator | 64cca70d033a registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init 
--single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-18 04:52:17.362440 | orchestrator | e6606640649d registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-18 04:52:17.362454 | orchestrator | d17dabf8fcb8 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-18 04:52:17.362472 | orchestrator | 6f587318891e registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-18 04:52:17.362491 | orchestrator | df2a8fe3bf51 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-18 04:52:17.362515 | orchestrator | 161e7aaac3d3 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-18 04:52:17.362542 | orchestrator | cc888439ac5e registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-18 04:52:17.362560 | orchestrator | 4af4cd635448 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-18 04:52:17.362578 | orchestrator | 6d7f674a7963 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-18 04:52:17.362596 | orchestrator | 13686da07d76 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) glance_api 2026-02-18 04:52:17.362614 | orchestrator | 09763045283f 
registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-18 04:52:17.362632 | orchestrator | cefd24b775b4 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-18 04:52:17.362662 | orchestrator | cce54a55ade6 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-18 04:52:17.362681 | orchestrator | 8287cbdbbd33 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-18 04:52:17.362700 | orchestrator | 9413a2cf479c registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-18 04:52:17.362718 | orchestrator | 3350ab19bbde registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-18 04:52:17.362746 | orchestrator | 5456fc13f7ec registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-18 04:52:17.362758 | orchestrator | e76adb153ed7 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-18 04:52:17.362769 | orchestrator | 795041e75c70 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-18 04:52:17.362779 | orchestrator | 9cfbee1bcdd7 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-02-18 04:52:17.362790 | orchestrator | bd09260d5e8b 
registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-02-18 04:52:17.362801 | orchestrator | 69f710201b87 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-02-18 04:52:17.362812 | orchestrator | f86b22c3ef82 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-2 2026-02-18 04:52:17.362822 | orchestrator | 46dde53cc56c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-02-18 04:52:17.362841 | orchestrator | 11fb53bc1513 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-02-18 04:52:17.362852 | orchestrator | 5343e5800978 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-18 04:52:17.362868 | orchestrator | cadf31fd3a8e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-18 04:52:17.362887 | orchestrator | 86f80cdfe899 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-18 04:52:17.362904 | orchestrator | 7c6c2fae64aa registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-18 04:52:17.362920 | orchestrator | 9b8a5c603a02 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-18 04:52:17.363069 | orchestrator | 5dae1832d0f8 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-18 04:52:17.363087 | orchestrator | 8784dc433ff5 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-18 04:52:17.363099 | orchestrator | 0b6d2c2521bc registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-18 04:52:17.363157 | orchestrator | 2307a5008023 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-18 04:52:17.363179 | orchestrator | 92aa8ca34822 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-18 04:52:17.363190 | orchestrator | 4f651b5313d0 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-18 04:52:17.363201 | orchestrator | f28fc07bae81 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-18 04:52:17.363211 | orchestrator | d5353358b5ad registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-18 04:52:17.363222 | orchestrator | bd8ffde43775 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-02-18 04:52:17.363233 | orchestrator | 3e19adf825ea registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-18 04:52:17.363244 | orchestrator | 0d929e29c59f registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours 
(healthy) haproxy 2026-02-18 04:52:17.363255 | orchestrator | 2b9798ef63df registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-18 04:52:17.363266 | orchestrator | 11084807d88a registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-18 04:52:17.363277 | orchestrator | 0cbfbf27efa6 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-18 04:52:17.672324 | orchestrator | 2026-02-18 04:52:17.672420 | orchestrator | ## Images @ testbed-node-2 2026-02-18 04:52:17.672437 | orchestrator | 2026-02-18 04:52:17.672451 | orchestrator | + echo 2026-02-18 04:52:17.672465 | orchestrator | + echo '## Images @ testbed-node-2' 2026-02-18 04:52:17.672481 | orchestrator | + echo 2026-02-18 04:52:17.672491 | orchestrator | + osism container testbed-node-2 images 2026-02-18 04:52:20.128701 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-18 04:52:20.128810 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-18 04:52:20.128825 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-18 04:52:20.128837 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-18 04:52:20.128865 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-18 04:52:20.128876 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-18 04:52:20.128887 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-18 04:52:20.128898 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 
2026-02-18 04:52:20.128909 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-18 04:52:20.128939 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-18 04:52:20.128950 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-18 04:52:20.128965 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-18 04:52:20.128976 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-18 04:52:20.128988 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-18 04:52:20.128999 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-18 04:52:20.129010 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-18 04:52:20.129021 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-18 04:52:20.129031 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-18 04:52:20.129042 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-18 04:52:20.129053 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-18 04:52:20.129064 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-18 04:52:20.129075 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 
2026-02-18 04:52:20.129085 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-18 04:52:20.129096 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-18 04:52:20.129154 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-18 04:52:20.129165 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-18 04:52:20.129176 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-18 04:52:20.129187 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-18 04:52:20.129198 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-18 04:52:20.129209 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-18 04:52:20.129219 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-18 04:52:20.129230 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-18 04:52:20.129259 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-18 04:52:20.129271 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-18 04:52:20.129282 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-18 04:52:20.129293 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-18 
04:52:20.129312 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-18 04:52:20.129323 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-18 04:52:20.129334 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-18 04:52:20.129353 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-18 04:52:20.129364 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-18 04:52:20.129375 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-18 04:52:20.129386 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-18 04:52:20.129397 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-18 04:52:20.129407 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-18 04:52:20.129418 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-18 04:52:20.129429 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-18 04:52:20.129440 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-18 04:52:20.129450 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-18 04:52:20.129461 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-18 
04:52:20.129472 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-18 04:52:20.129483 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-18 04:52:20.129493 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-18 04:52:20.129504 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-18 04:52:20.129515 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-18 04:52:20.129525 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-18 04:52:20.129536 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-18 04:52:20.129547 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-18 04:52:20.129558 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-18 04:52:20.129569 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-18 04:52:20.129580 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-18 04:52:20.129591 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-18 04:52:20.129608 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-18 04:52:20.129618 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-18 
04:52:20.129637 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-18 04:52:20.129648 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-18 04:52:20.129659 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-18 04:52:20.129670 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-18 04:52:20.129686 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-18 04:52:20.129697 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-18 04:52:20.457975 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-02-18 04:52:20.465348 | orchestrator | + set -e 2026-02-18 04:52:20.465429 | orchestrator | + source /opt/manager-vars.sh 2026-02-18 04:52:20.465445 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-18 04:52:20.465457 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-18 04:52:20.465468 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-18 04:52:20.465479 | orchestrator | ++ CEPH_VERSION=reef 2026-02-18 04:52:20.465490 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-18 04:52:20.465501 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-18 04:52:20.465512 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-18 04:52:20.465523 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-18 04:52:20.465534 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-18 04:52:20.465545 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-18 04:52:20.465555 | orchestrator | ++ export ARA=false 2026-02-18 04:52:20.465566 | orchestrator | ++ ARA=false 2026-02-18 04:52:20.465577 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-18 04:52:20.465588 | orchestrator | 
++ DEPLOY_MODE=manager
2026-02-18 04:52:20.465598 | orchestrator | ++ export TEMPEST=false
2026-02-18 04:52:20.465609 | orchestrator | ++ TEMPEST=false
2026-02-18 04:52:20.465620 | orchestrator | ++ export IS_ZUUL=true
2026-02-18 04:52:20.465636 | orchestrator | ++ IS_ZUUL=true
2026-02-18 04:52:20.465656 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 04:52:20.465676 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 04:52:20.465691 | orchestrator | ++ export EXTERNAL_API=false
2026-02-18 04:52:20.465702 | orchestrator | ++ EXTERNAL_API=false
2026-02-18 04:52:20.465713 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-18 04:52:20.465724 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-18 04:52:20.465735 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-18 04:52:20.465746 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-18 04:52:20.465757 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-18 04:52:20.465767 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-18 04:52:20.465778 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-18 04:52:20.465789 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-02-18 04:52:20.472467 | orchestrator | + set -e
2026-02-18 04:52:20.472531 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-18 04:52:20.472544 | orchestrator | ++ export INTERACTIVE=false
2026-02-18 04:52:20.472555 | orchestrator | ++ INTERACTIVE=false
2026-02-18 04:52:20.472565 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-18 04:52:20.472574 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-18 04:52:20.472584 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-18 04:52:20.473152 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-18 04:52:20.476902 | orchestrator |
2026-02-18 04:52:20.476962 | orchestrator | #
Ceph status
2026-02-18 04:52:20.476976 | orchestrator |
2026-02-18 04:52:20.476987 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-18 04:52:20.476999 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-18 04:52:20.477010 | orchestrator | + echo
2026-02-18 04:52:20.477021 | orchestrator | + echo '# Ceph status'
2026-02-18 04:52:20.477056 | orchestrator | + echo
2026-02-18 04:52:20.477068 | orchestrator | + ceph -s
2026-02-18 04:52:21.040691 | orchestrator | cluster:
2026-02-18 04:52:21.040818 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-02-18 04:52:21.040834 | orchestrator | health: HEALTH_OK
2026-02-18 04:52:21.040845 | orchestrator |
2026-02-18 04:52:21.040855 | orchestrator | services:
2026-02-18 04:52:21.040865 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 69m)
2026-02-18 04:52:21.040889 | orchestrator | mgr: testbed-node-2(active, since 56m), standbys: testbed-node-1, testbed-node-0
2026-02-18 04:52:21.040901 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-02-18 04:52:21.040911 | orchestrator | osd: 6 osds: 6 up (since 65m), 6 in (since 66m)
2026-02-18 04:52:21.040921 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-02-18 04:52:21.040931 | orchestrator |
2026-02-18 04:52:21.040941 | orchestrator | data:
2026-02-18 04:52:21.040951 | orchestrator | volumes: 1/1 healthy
2026-02-18 04:52:21.040961 | orchestrator | pools: 14 pools, 401 pgs
2026-02-18 04:52:21.040971 | orchestrator | objects: 555 objects, 2.2 GiB
2026-02-18 04:52:21.040981 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-02-18 04:52:21.040991 | orchestrator | pgs: 401 active+clean
2026-02-18 04:52:21.041001 | orchestrator |
2026-02-18 04:52:21.086296 | orchestrator |
2026-02-18 04:52:21.086391 | orchestrator | # Ceph versions
2026-02-18 04:52:21.086406 | orchestrator |
2026-02-18 04:52:21.086418 | orchestrator | + echo
2026-02-18 04:52:21.086428 | orchestrator | + echo '# Ceph versions'
2026-02-18 04:52:21.086439 | orchestrator | + echo
2026-02-18 04:52:21.086449 | orchestrator | + ceph versions
2026-02-18 04:52:21.693229 | orchestrator | {
2026-02-18 04:52:21.693331 | orchestrator | "mon": {
2026-02-18 04:52:21.693347 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-18 04:52:21.693360 | orchestrator | },
2026-02-18 04:52:21.693372 | orchestrator | "mgr": {
2026-02-18 04:52:21.693383 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-18 04:52:21.693394 | orchestrator | },
2026-02-18 04:52:21.693405 | orchestrator | "osd": {
2026-02-18 04:52:21.693416 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-02-18 04:52:21.693427 | orchestrator | },
2026-02-18 04:52:21.693438 | orchestrator | "mds": {
2026-02-18 04:52:21.693449 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-18 04:52:21.693460 | orchestrator | },
2026-02-18 04:52:21.693471 | orchestrator | "rgw": {
2026-02-18 04:52:21.693494 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-18 04:52:21.693505 | orchestrator | },
2026-02-18 04:52:21.693516 | orchestrator | "overall": {
2026-02-18 04:52:21.693527 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-02-18 04:52:21.693539 | orchestrator | }
2026-02-18 04:52:21.693550 | orchestrator | }
2026-02-18 04:52:21.735686 | orchestrator |
2026-02-18 04:52:21.735784 | orchestrator | # Ceph OSD tree
2026-02-18 04:52:21.735799 | orchestrator |
2026-02-18 04:52:21.735811 | orchestrator | + echo
2026-02-18 04:52:21.735823 | orchestrator | + echo '# Ceph OSD tree'
2026-02-18 04:52:21.735834 | orchestrator | + echo
2026-02-18 04:52:21.735845 | orchestrator | + ceph osd df tree
2026-02-18 04:52:22.235737 | orchestrator | ID CLASS
WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-02-18 04:52:22.235849 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 390 MiB 113 GiB 5.88 1.00 - root default
2026-02-18 04:52:22.235865 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3
2026-02-18 04:52:22.235877 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 62 MiB 18 GiB 7.51 1.28 200 up osd.0
2026-02-18 04:52:22.235888 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 864 MiB 803 MiB 1 KiB 62 MiB 19 GiB 4.22 0.72 190 up osd.4
2026-02-18 04:52:22.235899 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4
2026-02-18 04:52:22.235910 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.77 1.15 197 up osd.1
2026-02-18 04:52:22.235948 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1017 MiB 955 MiB 1 KiB 62 MiB 19 GiB 4.97 0.84 191 up osd.5
2026-02-18 04:52:22.235959 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.01 - host testbed-node-5
2026-02-18 04:52:22.235971 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 66 MiB 19 GiB 6.43 1.09 192 up osd.2
2026-02-18 04:52:22.235983 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 78 MiB 19 GiB 5.40 0.92 200 up osd.3
2026-02-18 04:52:22.235993 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 390 MiB 113 GiB 5.88
2026-02-18 04:52:22.236005 | orchestrator | MIN/MAX VAR: 0.72/1.28 STDDEV: 1.12
2026-02-18 04:52:22.280250 | orchestrator |
2026-02-18 04:52:22.280343 | orchestrator | # Ceph monitor status
2026-02-18 04:52:22.280361 | orchestrator |
2026-02-18 04:52:22.280375 | orchestrator | + echo
2026-02-18 04:52:22.280388 | orchestrator | + echo '# Ceph monitor status'
2026-02-18 04:52:22.280401 | orchestrator | + echo
2026-02-18 04:52:22.280414 | orchestrator | + ceph mon stat
2026-02-18
04:52:22.855565 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-02-18 04:52:22.901211 | orchestrator |
2026-02-18 04:52:22.901284 | orchestrator | # Ceph quorum status
2026-02-18 04:52:22.901296 | orchestrator |
2026-02-18 04:52:22.901305 | orchestrator | + echo
2026-02-18 04:52:22.901314 | orchestrator | + echo '# Ceph quorum status'
2026-02-18 04:52:22.901324 | orchestrator | + echo
2026-02-18 04:52:22.902205 | orchestrator | + ceph quorum_status
2026-02-18 04:52:22.902234 | orchestrator | + jq
2026-02-18 04:52:23.550259 | orchestrator | {
2026-02-18 04:52:23.550357 | orchestrator | "election_epoch": 8,
2026-02-18 04:52:23.550373 | orchestrator | "quorum": [
2026-02-18 04:52:23.550385 | orchestrator | 0,
2026-02-18 04:52:23.550396 | orchestrator | 1,
2026-02-18 04:52:23.550408 | orchestrator | 2
2026-02-18 04:52:23.550419 | orchestrator | ],
2026-02-18 04:52:23.550430 | orchestrator | "quorum_names": [
2026-02-18 04:52:23.550441 | orchestrator | "testbed-node-0",
2026-02-18 04:52:23.550452 | orchestrator | "testbed-node-1",
2026-02-18 04:52:23.550463 | orchestrator | "testbed-node-2"
2026-02-18 04:52:23.550474 | orchestrator | ],
2026-02-18 04:52:23.550485 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-02-18 04:52:23.550496 | orchestrator | "quorum_age": 4189,
2026-02-18 04:52:23.550507 | orchestrator | "features": {
2026-02-18 04:52:23.550518 | orchestrator | "quorum_con": "4540138322906710015",
2026-02-18 04:52:23.550529 | orchestrator | "quorum_mon": [
2026-02-18 04:52:23.550540 | orchestrator | "kraken",
2026-02-18 04:52:23.550551 | orchestrator | "luminous",
2026-02-18 04:52:23.550562 | orchestrator | "mimic",
2026-02-18
04:52:23.550573 | orchestrator | "osdmap-prune",
2026-02-18 04:52:23.550583 | orchestrator | "nautilus",
2026-02-18 04:52:23.550594 | orchestrator | "octopus",
2026-02-18 04:52:23.550605 | orchestrator | "pacific",
2026-02-18 04:52:23.550616 | orchestrator | "elector-pinging",
2026-02-18 04:52:23.550777 | orchestrator | "quincy",
2026-02-18 04:52:23.550794 | orchestrator | "reef"
2026-02-18 04:52:23.550806 | orchestrator | ]
2026-02-18 04:52:23.550818 | orchestrator | },
2026-02-18 04:52:23.550831 | orchestrator | "monmap": {
2026-02-18 04:52:23.550843 | orchestrator | "epoch": 1,
2026-02-18 04:52:23.550855 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-02-18 04:52:23.550869 | orchestrator | "modified": "2026-02-18T03:42:17.097780Z",
2026-02-18 04:52:23.550881 | orchestrator | "created": "2026-02-18T03:42:17.097780Z",
2026-02-18 04:52:23.550893 | orchestrator | "min_mon_release": 18,
2026-02-18 04:52:23.550906 | orchestrator | "min_mon_release_name": "reef",
2026-02-18 04:52:23.550918 | orchestrator | "election_strategy": 1,
2026-02-18 04:52:23.550930 | orchestrator | "disallowed_leaders: ": "",
2026-02-18 04:52:23.550942 | orchestrator | "stretch_mode": false,
2026-02-18 04:52:23.550954 | orchestrator | "tiebreaker_mon": "",
2026-02-18 04:52:23.550966 | orchestrator | "removed_ranks: ": "",
2026-02-18 04:52:23.550978 | orchestrator | "features": {
2026-02-18 04:52:23.550989 | orchestrator | "persistent": [
2026-02-18 04:52:23.551001 | orchestrator | "kraken",
2026-02-18 04:52:23.551042 | orchestrator | "luminous",
2026-02-18 04:52:23.551054 | orchestrator | "mimic",
2026-02-18 04:52:23.551064 | orchestrator | "osdmap-prune",
2026-02-18 04:52:23.551075 | orchestrator | "nautilus",
2026-02-18 04:52:23.551086 | orchestrator | "octopus",
2026-02-18 04:52:23.551097 | orchestrator | "pacific",
2026-02-18 04:52:23.551132 | orchestrator | "elector-pinging",
2026-02-18 04:52:23.551143 | orchestrator | "quincy",
2026-02-18 04:52:23.551154 |
orchestrator | "reef"
2026-02-18 04:52:23.551165 | orchestrator | ],
2026-02-18 04:52:23.551176 | orchestrator | "optional": []
2026-02-18 04:52:23.551187 | orchestrator | },
2026-02-18 04:52:23.551198 | orchestrator | "mons": [
2026-02-18 04:52:23.551224 | orchestrator | {
2026-02-18 04:52:23.551236 | orchestrator | "rank": 0,
2026-02-18 04:52:23.551247 | orchestrator | "name": "testbed-node-0",
2026-02-18 04:52:23.551257 | orchestrator | "public_addrs": {
2026-02-18 04:52:23.551268 | orchestrator | "addrvec": [
2026-02-18 04:52:23.551279 | orchestrator | {
2026-02-18 04:52:23.551290 | orchestrator | "type": "v2",
2026-02-18 04:52:23.551301 | orchestrator | "addr": "192.168.16.8:3300",
2026-02-18 04:52:23.551312 | orchestrator | "nonce": 0
2026-02-18 04:52:23.551323 | orchestrator | },
2026-02-18 04:52:23.551334 | orchestrator | {
2026-02-18 04:52:23.551345 | orchestrator | "type": "v1",
2026-02-18 04:52:23.551356 | orchestrator | "addr": "192.168.16.8:6789",
2026-02-18 04:52:23.551367 | orchestrator | "nonce": 0
2026-02-18 04:52:23.551378 | orchestrator | }
2026-02-18 04:52:23.551397 | orchestrator | ]
2026-02-18 04:52:23.551416 | orchestrator | },
2026-02-18 04:52:23.551436 | orchestrator | "addr": "192.168.16.8:6789/0",
2026-02-18 04:52:23.551455 | orchestrator | "public_addr": "192.168.16.8:6789/0",
2026-02-18 04:52:23.551473 | orchestrator | "priority": 0,
2026-02-18 04:52:23.551492 | orchestrator | "weight": 0,
2026-02-18 04:52:23.551511 | orchestrator | "crush_location": "{}"
2026-02-18 04:52:23.551528 | orchestrator | },
2026-02-18 04:52:23.551547 | orchestrator | {
2026-02-18 04:52:23.551567 | orchestrator | "rank": 1,
2026-02-18 04:52:23.551586 | orchestrator | "name": "testbed-node-1",
2026-02-18 04:52:23.551604 | orchestrator | "public_addrs": {
2026-02-18 04:52:23.551624 | orchestrator | "addrvec": [
2026-02-18 04:52:23.551644 | orchestrator | {
2026-02-18 04:52:23.551661 | orchestrator | "type": "v2",
2026-02-18 04:52:23.551672 | orchestrator |
"addr": "192.168.16.11:3300",
2026-02-18 04:52:23.551682 | orchestrator | "nonce": 0
2026-02-18 04:52:23.551693 | orchestrator | },
2026-02-18 04:52:23.551704 | orchestrator | {
2026-02-18 04:52:23.551715 | orchestrator | "type": "v1",
2026-02-18 04:52:23.551725 | orchestrator | "addr": "192.168.16.11:6789",
2026-02-18 04:52:23.551736 | orchestrator | "nonce": 0
2026-02-18 04:52:23.551747 | orchestrator | }
2026-02-18 04:52:23.551758 | orchestrator | ]
2026-02-18 04:52:23.551769 | orchestrator | },
2026-02-18 04:52:23.551780 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-02-18 04:52:23.551791 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-02-18 04:52:23.551801 | orchestrator | "priority": 0,
2026-02-18 04:52:23.551812 | orchestrator | "weight": 0,
2026-02-18 04:52:23.551823 | orchestrator | "crush_location": "{}"
2026-02-18 04:52:23.551834 | orchestrator | },
2026-02-18 04:52:23.551845 | orchestrator | {
2026-02-18 04:52:23.551855 | orchestrator | "rank": 2,
2026-02-18 04:52:23.551866 | orchestrator | "name": "testbed-node-2",
2026-02-18 04:52:23.551877 | orchestrator | "public_addrs": {
2026-02-18 04:52:23.551888 | orchestrator | "addrvec": [
2026-02-18 04:52:23.551898 | orchestrator | {
2026-02-18 04:52:23.551909 | orchestrator | "type": "v2",
2026-02-18 04:52:23.551920 | orchestrator | "addr": "192.168.16.12:3300",
2026-02-18 04:52:23.551931 | orchestrator | "nonce": 0
2026-02-18 04:52:23.551942 | orchestrator | },
2026-02-18 04:52:23.551953 | orchestrator | {
2026-02-18 04:52:23.551963 | orchestrator | "type": "v1",
2026-02-18 04:52:23.551974 | orchestrator | "addr": "192.168.16.12:6789",
2026-02-18 04:52:23.551985 | orchestrator | "nonce": 0
2026-02-18 04:52:23.551996 | orchestrator | }
2026-02-18 04:52:23.552006 | orchestrator | ]
2026-02-18 04:52:23.552017 | orchestrator | },
2026-02-18 04:52:23.552028 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-02-18 04:52:23.552039 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-02-18 04:52:23.552050 | orchestrator | "priority": 0,
2026-02-18 04:52:23.552072 | orchestrator | "weight": 0,
2026-02-18 04:52:23.552083 | orchestrator | "crush_location": "{}"
2026-02-18 04:52:23.552094 | orchestrator | }
2026-02-18 04:52:23.552127 | orchestrator | ]
2026-02-18 04:52:23.552138 | orchestrator | }
2026-02-18 04:52:23.552149 | orchestrator | }
2026-02-18 04:52:23.552175 | orchestrator |
2026-02-18 04:52:23.552187 | orchestrator | # Ceph free space status
2026-02-18 04:52:23.552198 | orchestrator |
2026-02-18 04:52:23.552209 | orchestrator | + echo
2026-02-18 04:52:23.552220 | orchestrator | + echo '# Ceph free space status'
2026-02-18 04:52:23.552231 | orchestrator | + echo
2026-02-18 04:52:23.552242 | orchestrator | + ceph df
2026-02-18 04:52:24.130167 | orchestrator | --- RAW STORAGE ---
2026-02-18 04:52:24.130269 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-02-18 04:52:24.130296 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88
2026-02-18 04:52:24.130320 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88
2026-02-18 04:52:24.130331 | orchestrator |
2026-02-18 04:52:24.130343 | orchestrator | --- POOLS ---
2026-02-18 04:52:24.130355 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-02-18 04:52:24.130367 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2026-02-18 04:52:24.130378 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-02-18 04:52:24.130389 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-02-18 04:52:24.130400 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-02-18 04:52:24.130410 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-02-18 04:52:24.130422 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-02-18 04:52:24.130433 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-02-18 04:52:24.130444 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-02-18
04:52:24.130455 | orchestrator | .rgw.root 9 32 3.0 KiB 7 56 KiB 0 52 GiB
2026-02-18 04:52:24.130466 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-02-18 04:52:24.130477 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-02-18 04:52:24.130487 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB
2026-02-18 04:52:24.130498 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-02-18 04:52:24.130509 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-02-18 04:52:24.178499 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-18 04:52:24.235322 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-18 04:52:24.235412 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-02-18 04:52:24.235430 | orchestrator | + osism apply facts
2026-02-18 04:52:26.261148 | orchestrator | 2026-02-18 04:52:26 | INFO  | Task 28c3c4a8-5ed2-438d-85c8-e48ce2fe9041 (facts) was prepared for execution.
2026-02-18 04:52:26.261255 | orchestrator | 2026-02-18 04:52:26 | INFO  | It takes a moment until task 28c3c4a8-5ed2-438d-85c8-e48ce2fe9041 (facts) has been started and output is visible here.
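As an aside for readers of this log: the `ceph versions` output above is what an upgrade check can inspect to confirm the whole cluster runs a single Ceph build. The sketch below is a hypothetical helper, not part of the testbed scripts; `uniform_version` and the trimmed `sample` JSON are illustrative only.

```python
import json

# Hypothetical helper (not in /opt/configuration/scripts): given the JSON that
# `ceph versions` prints, report whether every daemon runs one Ceph build.
def uniform_version(versions: dict) -> bool:
    # "overall" maps each distinct version string to a daemon count;
    # exactly one key means the whole cluster runs a single build.
    return len(versions.get("overall", {})) == 1

# Sample trimmed from the log output above (version hashes elided).
sample = json.loads("""
{
  "mon": {"ceph version 18.2.7 reef (stable)": 3},
  "overall": {"ceph version 18.2.7 reef (stable)": 18}
}
""")
print(uniform_version(sample))  # single reef build across all daemons -> True
```

A mixed-version cluster, as seen mid-upgrade, would put two keys under "overall" and the check would return False.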
2026-02-18 04:52:39.991226 | orchestrator |
2026-02-18 04:52:39.991346 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-18 04:52:39.991363 | orchestrator |
2026-02-18 04:52:39.991376 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-18 04:52:39.991388 | orchestrator | Wednesday 18 February 2026 04:52:30 +0000 (0:00:00.270) 0:00:00.270 ****
2026-02-18 04:52:39.991399 | orchestrator | ok: [testbed-manager]
2026-02-18 04:52:39.991411 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:52:39.991422 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:52:39.991433 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:52:39.991444 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:52:39.991455 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:52:39.991466 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:52:39.991476 | orchestrator |
2026-02-18 04:52:39.991487 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-18 04:52:39.991524 | orchestrator | Wednesday 18 February 2026 04:52:32 +0000 (0:00:01.236) 0:00:01.507 ****
2026-02-18 04:52:39.991535 | orchestrator | skipping: [testbed-manager]
2026-02-18 04:52:39.991547 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:52:39.991557 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:52:39.991568 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:52:39.991579 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:52:39.991589 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:52:39.991600 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:52:39.991664 | orchestrator |
2026-02-18 04:52:39.991678 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-18 04:52:39.991690 | orchestrator |
2026-02-18 04:52:39.991703 | orchestrator | TASK [Gathers facts about hosts]
***********************************************
2026-02-18 04:52:39.991716 | orchestrator | Wednesday 18 February 2026 04:52:33 +0000 (0:00:01.363) 0:00:02.870 ****
2026-02-18 04:52:39.991728 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:52:39.991741 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:52:39.991753 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:52:39.991765 | orchestrator | ok: [testbed-manager]
2026-02-18 04:52:39.991777 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:52:39.991790 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:52:39.991802 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:52:39.991814 | orchestrator |
2026-02-18 04:52:39.991826 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-18 04:52:39.991839 | orchestrator |
2026-02-18 04:52:39.991852 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-18 04:52:39.991865 | orchestrator | Wednesday 18 February 2026 04:52:38 +0000 (0:00:05.432) 0:00:08.303 ****
2026-02-18 04:52:39.991877 | orchestrator | skipping: [testbed-manager]
2026-02-18 04:52:39.991890 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:52:39.991901 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:52:39.991911 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:52:39.991922 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:52:39.991933 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:52:39.991943 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:52:39.991954 | orchestrator |
2026-02-18 04:52:39.991973 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 04:52:39.991991 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 04:52:39.992011 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0
ignored=0
2026-02-18 04:52:39.992029 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 04:52:39.992065 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 04:52:39.992126 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 04:52:39.992149 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 04:52:39.992167 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 04:52:39.992187 | orchestrator |
2026-02-18 04:52:39.992206 | orchestrator |
2026-02-18 04:52:39.992223 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 04:52:39.992242 | orchestrator | Wednesday 18 February 2026 04:52:39 +0000 (0:00:00.603) 0:00:08.907 ****
2026-02-18 04:52:39.992260 | orchestrator | ===============================================================================
2026-02-18 04:52:39.992278 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.43s
2026-02-18 04:52:39.992312 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s
2026-02-18 04:52:39.992332 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s
2026-02-18 04:52:39.992350 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s
2026-02-18 04:52:40.380908 | orchestrator | + osism validate ceph-mons
2026-02-18 04:53:14.057188 | orchestrator |
2026-02-18 04:53:14.057312 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-02-18 04:53:14.057330 | orchestrator |
2026-02-18 04:53:14.057342 | orchestrator | TASK [Get timestamp for report file]
*******************************************
2026-02-18 04:53:14.057356 | orchestrator | Wednesday 18 February 2026 04:52:57 +0000 (0:00:00.458) 0:00:00.458 ****
2026-02-18 04:53:14.057368 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-18 04:53:14.057379 | orchestrator |
2026-02-18 04:53:14.057391 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-18 04:53:14.057402 | orchestrator | Wednesday 18 February 2026 04:52:58 +0000 (0:00:00.931) 0:00:01.389 ****
2026-02-18 04:53:14.057413 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-18 04:53:14.057424 | orchestrator |
2026-02-18 04:53:14.057435 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-18 04:53:14.057446 | orchestrator | Wednesday 18 February 2026 04:52:59 +0000 (0:00:01.141) 0:00:02.530 ****
2026-02-18 04:53:14.057458 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:53:14.057470 | orchestrator |
2026-02-18 04:53:14.057481 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-02-18 04:53:14.057492 | orchestrator | Wednesday 18 February 2026 04:52:59 +0000 (0:00:00.127) 0:00:02.658 ****
2026-02-18 04:53:14.057504 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:53:14.057515 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:53:14.057526 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:53:14.057537 | orchestrator |
2026-02-18 04:53:14.057548 | orchestrator | TASK [Get container info] ******************************************************
2026-02-18 04:53:14.057560 | orchestrator | Wednesday 18 February 2026 04:52:59 +0000 (0:00:00.300) 0:00:02.958 ****
2026-02-18 04:53:14.057571 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:53:14.057582 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:53:14.057593 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:53:14.057604 |
orchestrator |
2026-02-18 04:53:14.057615 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-02-18 04:53:14.057626 | orchestrator | Wednesday 18 February 2026 04:53:00 +0000 (0:00:01.064) 0:00:04.023 ****
2026-02-18 04:53:14.057638 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:53:14.057649 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:53:14.057662 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:53:14.057674 | orchestrator |
2026-02-18 04:53:14.057687 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-02-18 04:53:14.057699 | orchestrator | Wednesday 18 February 2026 04:53:01 +0000 (0:00:00.375) 0:00:04.398 ****
2026-02-18 04:53:14.057712 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:53:14.057724 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:53:14.057737 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:53:14.057749 | orchestrator |
2026-02-18 04:53:14.057762 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-18 04:53:14.057775 | orchestrator | Wednesday 18 February 2026 04:53:01 +0000 (0:00:00.321) 0:00:04.967 ****
2026-02-18 04:53:14.057787 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:53:14.057800 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:53:14.057813 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:53:14.057825 | orchestrator |
2026-02-18 04:53:14.057837 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-02-18 04:53:14.057849 | orchestrator | Wednesday 18 February 2026 04:53:02 +0000 (0:00:00.321) 0:00:05.289 ****
2026-02-18 04:53:14.057862 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:53:14.057898 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:53:14.057911 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:53:14.057924 | orchestrator |
2026-02-18
04:53:14.057937 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-02-18 04:53:14.057950 | orchestrator | Wednesday 18 February 2026 04:53:02 +0000 (0:00:00.319) 0:00:05.608 ****
2026-02-18 04:53:14.057963 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:53:14.057976 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:53:14.057989 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:53:14.058001 | orchestrator |
2026-02-18 04:53:14.058014 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-18 04:53:14.058132 | orchestrator | Wednesday 18 February 2026 04:53:03 +0000 (0:00:00.542) 0:00:06.150 ****
2026-02-18 04:53:14.058150 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:53:14.058163 | orchestrator |
2026-02-18 04:53:14.058174 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-18 04:53:14.058185 | orchestrator | Wednesday 18 February 2026 04:53:03 +0000 (0:00:00.279) 0:00:06.430 ****
2026-02-18 04:53:14.058196 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:53:14.058207 | orchestrator |
2026-02-18 04:53:14.058217 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-18 04:53:14.058228 | orchestrator | Wednesday 18 February 2026 04:53:03 +0000 (0:00:00.284) 0:00:06.714 ****
2026-02-18 04:53:14.058282 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:53:14.058297 | orchestrator |
2026-02-18 04:53:14.058308 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-18 04:53:14.058319 | orchestrator | Wednesday 18 February 2026 04:53:03 +0000 (0:00:00.277) 0:00:06.991 ****
2026-02-18 04:53:14.058330 | orchestrator |
2026-02-18 04:53:14.058341 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-18 04:53:14.058352 | orchestrator |
Wednesday 18 February 2026 04:53:04 +0000 (0:00:00.076) 0:00:07.067 ****
2026-02-18 04:53:14.058363 | orchestrator |
2026-02-18 04:53:14.058373 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-18 04:53:14.058384 | orchestrator | Wednesday 18 February 2026 04:53:04 +0000 (0:00:00.092) 0:00:07.160 ****
2026-02-18 04:53:14.058394 | orchestrator |
2026-02-18 04:53:14.058405 | orchestrator | TASK [Print report file information] *******************************************
2026-02-18 04:53:14.058416 | orchestrator | Wednesday 18 February 2026 04:53:04 +0000 (0:00:00.078) 0:00:07.239 ****
2026-02-18 04:53:14.058427 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:53:14.058438 | orchestrator |
2026-02-18 04:53:14.058448 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-18 04:53:14.058477 | orchestrator | Wednesday 18 February 2026 04:53:04 +0000 (0:00:00.263) 0:00:07.502 ****
2026-02-18 04:53:14.058489 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:53:14.058500 | orchestrator |
2026-02-18 04:53:14.058530 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-02-18 04:53:14.058542 | orchestrator | Wednesday 18 February 2026 04:53:04 +0000 (0:00:00.268) 0:00:07.771 ****
2026-02-18 04:53:14.058553 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:53:14.058564 | orchestrator |
2026-02-18 04:53:14.058575 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-02-18 04:53:14.058586 | orchestrator | Wednesday 18 February 2026 04:53:04 +0000 (0:00:00.152) 0:00:07.923 ****
2026-02-18 04:53:14.058597 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:53:14.058612 | orchestrator |
2026-02-18 04:53:14.058623 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-02-18 04:53:14.058634 | orchestrator |
Wednesday 18 February 2026 04:53:06 +0000 (0:00:01.623) 0:00:09.546 **** 2026-02-18 04:53:14.058645 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:14.058655 | orchestrator | 2026-02-18 04:53:14.058666 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-02-18 04:53:14.058677 | orchestrator | Wednesday 18 February 2026 04:53:07 +0000 (0:00:00.646) 0:00:10.193 **** 2026-02-18 04:53:14.058700 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:14.058711 | orchestrator | 2026-02-18 04:53:14.058721 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-02-18 04:53:14.058732 | orchestrator | Wednesday 18 February 2026 04:53:07 +0000 (0:00:00.134) 0:00:10.328 **** 2026-02-18 04:53:14.058743 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:14.058768 | orchestrator | 2026-02-18 04:53:14.058791 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-02-18 04:53:14.058802 | orchestrator | Wednesday 18 February 2026 04:53:07 +0000 (0:00:00.346) 0:00:10.674 **** 2026-02-18 04:53:14.058813 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:14.058823 | orchestrator | 2026-02-18 04:53:14.058834 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-02-18 04:53:14.058845 | orchestrator | Wednesday 18 February 2026 04:53:07 +0000 (0:00:00.336) 0:00:11.011 **** 2026-02-18 04:53:14.058856 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:14.058867 | orchestrator | 2026-02-18 04:53:14.058877 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-02-18 04:53:14.058888 | orchestrator | Wednesday 18 February 2026 04:53:08 +0000 (0:00:00.129) 0:00:11.140 **** 2026-02-18 04:53:14.058899 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:14.058910 | orchestrator | 2026-02-18 04:53:14.058921 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-02-18 04:53:14.058932 | orchestrator | Wednesday 18 February 2026 04:53:08 +0000 (0:00:00.137) 0:00:11.277 **** 2026-02-18 04:53:14.058942 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:14.058953 | orchestrator | 2026-02-18 04:53:14.058964 | orchestrator | TASK [Gather status data] ****************************************************** 2026-02-18 04:53:14.058975 | orchestrator | Wednesday 18 February 2026 04:53:08 +0000 (0:00:00.123) 0:00:11.401 **** 2026-02-18 04:53:14.058985 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:53:14.058996 | orchestrator | 2026-02-18 04:53:14.059007 | orchestrator | TASK [Set health test data] **************************************************** 2026-02-18 04:53:14.059018 | orchestrator | Wednesday 18 February 2026 04:53:09 +0000 (0:00:01.364) 0:00:12.766 **** 2026-02-18 04:53:14.059028 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:14.059039 | orchestrator | 2026-02-18 04:53:14.059050 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-02-18 04:53:14.059087 | orchestrator | Wednesday 18 February 2026 04:53:10 +0000 (0:00:00.331) 0:00:13.097 **** 2026-02-18 04:53:14.059099 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:14.059110 | orchestrator | 2026-02-18 04:53:14.059120 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-02-18 04:53:14.059131 | orchestrator | Wednesday 18 February 2026 04:53:10 +0000 (0:00:00.176) 0:00:13.274 **** 2026-02-18 04:53:14.059142 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:14.059153 | orchestrator | 2026-02-18 04:53:14.059164 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-02-18 04:53:14.059175 | orchestrator | Wednesday 18 February 2026 04:53:10 +0000 (0:00:00.140) 0:00:13.414 **** 2026-02-18 04:53:14.059186 | 
orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:14.059197 | orchestrator | 2026-02-18 04:53:14.059208 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-02-18 04:53:14.059219 | orchestrator | Wednesday 18 February 2026 04:53:10 +0000 (0:00:00.160) 0:00:13.575 **** 2026-02-18 04:53:14.059235 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:14.059247 | orchestrator | 2026-02-18 04:53:14.059257 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-18 04:53:14.059268 | orchestrator | Wednesday 18 February 2026 04:53:10 +0000 (0:00:00.397) 0:00:13.972 **** 2026-02-18 04:53:14.059279 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 04:53:14.059290 | orchestrator | 2026-02-18 04:53:14.059301 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-18 04:53:14.059312 | orchestrator | Wednesday 18 February 2026 04:53:11 +0000 (0:00:00.278) 0:00:14.250 **** 2026-02-18 04:53:14.059335 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:14.059353 | orchestrator | 2026-02-18 04:53:14.059370 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-18 04:53:14.059388 | orchestrator | Wednesday 18 February 2026 04:53:11 +0000 (0:00:00.262) 0:00:14.513 **** 2026-02-18 04:53:14.059408 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 04:53:14.059426 | orchestrator | 2026-02-18 04:53:14.059444 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-18 04:53:14.059456 | orchestrator | Wednesday 18 February 2026 04:53:13 +0000 (0:00:01.793) 0:00:16.306 **** 2026-02-18 04:53:14.059516 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 04:53:14.059529 | orchestrator | 2026-02-18 04:53:14.059540 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-02-18 04:53:14.059551 | orchestrator | Wednesday 18 February 2026 04:53:13 +0000 (0:00:00.275) 0:00:16.582 **** 2026-02-18 04:53:14.059562 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 04:53:14.059572 | orchestrator | 2026-02-18 04:53:14.059593 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-18 04:53:16.934887 | orchestrator | Wednesday 18 February 2026 04:53:13 +0000 (0:00:00.271) 0:00:16.853 **** 2026-02-18 04:53:16.935039 | orchestrator | 2026-02-18 04:53:16.935098 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-18 04:53:16.935112 | orchestrator | Wednesday 18 February 2026 04:53:13 +0000 (0:00:00.087) 0:00:16.940 **** 2026-02-18 04:53:16.935122 | orchestrator | 2026-02-18 04:53:16.935133 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-18 04:53:16.935146 | orchestrator | Wednesday 18 February 2026 04:53:13 +0000 (0:00:00.070) 0:00:17.011 **** 2026-02-18 04:53:16.935164 | orchestrator | 2026-02-18 04:53:16.935181 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-18 04:53:16.935198 | orchestrator | Wednesday 18 February 2026 04:53:14 +0000 (0:00:00.088) 0:00:17.100 **** 2026-02-18 04:53:16.935216 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 04:53:16.935232 | orchestrator | 2026-02-18 04:53:16.935249 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-18 04:53:16.935266 | orchestrator | Wednesday 18 February 2026 04:53:15 +0000 (0:00:01.550) 0:00:18.650 **** 2026-02-18 04:53:16.935283 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-18 04:53:16.935301 | orchestrator |  "msg": [ 
2026-02-18 04:53:16.935321 | orchestrator |  "Validator run completed.", 2026-02-18 04:53:16.935339 | orchestrator |  "You can find the report file here:", 2026-02-18 04:53:16.935358 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-18T04:52:58+00:00-report.json", 2026-02-18 04:53:16.935377 | orchestrator |  "on the following host:", 2026-02-18 04:53:16.935395 | orchestrator |  "testbed-manager" 2026-02-18 04:53:16.935408 | orchestrator |  ] 2026-02-18 04:53:16.935420 | orchestrator | } 2026-02-18 04:53:16.935432 | orchestrator | 2026-02-18 04:53:16.935443 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:53:16.935457 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-18 04:53:16.935470 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 04:53:16.935483 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 04:53:16.935494 | orchestrator | 2026-02-18 04:53:16.935505 | orchestrator | 2026-02-18 04:53:16.935516 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:53:16.935527 | orchestrator | Wednesday 18 February 2026 04:53:16 +0000 (0:00:00.937) 0:00:19.588 **** 2026-02-18 04:53:16.935573 | orchestrator | =============================================================================== 2026-02-18 04:53:16.935586 | orchestrator | Aggregate test results step one ----------------------------------------- 1.79s 2026-02-18 04:53:16.935597 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.62s 2026-02-18 04:53:16.935609 | orchestrator | Write report file ------------------------------------------------------- 1.55s 2026-02-18 04:53:16.935626 | orchestrator | Gather status data 
------------------------------------------------------ 1.36s 2026-02-18 04:53:16.935642 | orchestrator | Create report output directory ------------------------------------------ 1.14s 2026-02-18 04:53:16.935659 | orchestrator | Get container info ------------------------------------------------------ 1.06s 2026-02-18 04:53:16.935673 | orchestrator | Print report file information ------------------------------------------- 0.94s 2026-02-18 04:53:16.935688 | orchestrator | Get timestamp for report file ------------------------------------------- 0.93s 2026-02-18 04:53:16.935703 | orchestrator | Set quorum test data ---------------------------------------------------- 0.65s 2026-02-18 04:53:16.935718 | orchestrator | Set test result to passed if container is existing ---------------------- 0.57s 2026-02-18 04:53:16.935755 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.54s 2026-02-18 04:53:16.935772 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.40s 2026-02-18 04:53:16.935786 | orchestrator | Set test result to failed if container is missing ----------------------- 0.38s 2026-02-18 04:53:16.935802 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.35s 2026-02-18 04:53:16.935818 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s 2026-02-18 04:53:16.935834 | orchestrator | Set health test data ---------------------------------------------------- 0.33s 2026-02-18 04:53:16.935848 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2026-02-18 04:53:16.935864 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.32s 2026-02-18 04:53:16.935878 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2026-02-18 04:53:16.935894 | orchestrator | Aggregate test results step two 
----------------------------------------- 0.28s 2026-02-18 04:53:17.276022 | orchestrator | + osism validate ceph-mgrs 2026-02-18 04:53:48.833190 | orchestrator | 2026-02-18 04:53:48.833309 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-02-18 04:53:48.833318 | orchestrator | 2026-02-18 04:53:48.833324 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-18 04:53:48.833330 | orchestrator | Wednesday 18 February 2026 04:53:34 +0000 (0:00:00.435) 0:00:00.435 **** 2026-02-18 04:53:48.833336 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 04:53:48.833341 | orchestrator | 2026-02-18 04:53:48.833346 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-18 04:53:48.833351 | orchestrator | Wednesday 18 February 2026 04:53:34 +0000 (0:00:00.887) 0:00:01.322 **** 2026-02-18 04:53:48.833356 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 04:53:48.833361 | orchestrator | 2026-02-18 04:53:48.833366 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-18 04:53:48.833371 | orchestrator | Wednesday 18 February 2026 04:53:36 +0000 (0:00:01.063) 0:00:02.386 **** 2026-02-18 04:53:48.833375 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:48.833381 | orchestrator | 2026-02-18 04:53:48.833386 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-18 04:53:48.833390 | orchestrator | Wednesday 18 February 2026 04:53:36 +0000 (0:00:00.123) 0:00:02.510 **** 2026-02-18 04:53:48.833395 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:48.833400 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:53:48.833404 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:53:48.833409 | orchestrator | 2026-02-18 04:53:48.833413 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-02-18 04:53:48.833418 | orchestrator | Wednesday 18 February 2026 04:53:36 +0000 (0:00:00.299) 0:00:02.809 **** 2026-02-18 04:53:48.833443 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:48.833448 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:53:48.833453 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:53:48.833457 | orchestrator | 2026-02-18 04:53:48.833462 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-18 04:53:48.833466 | orchestrator | Wednesday 18 February 2026 04:53:37 +0000 (0:00:01.017) 0:00:03.827 **** 2026-02-18 04:53:48.833471 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:48.833476 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:53:48.833480 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:53:48.833485 | orchestrator | 2026-02-18 04:53:48.833489 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-18 04:53:48.833494 | orchestrator | Wednesday 18 February 2026 04:53:37 +0000 (0:00:00.285) 0:00:04.113 **** 2026-02-18 04:53:48.833499 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:48.833504 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:53:48.833508 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:53:48.833513 | orchestrator | 2026-02-18 04:53:48.833517 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-18 04:53:48.833522 | orchestrator | Wednesday 18 February 2026 04:53:38 +0000 (0:00:00.539) 0:00:04.653 **** 2026-02-18 04:53:48.833526 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:48.833531 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:53:48.833535 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:53:48.833540 | orchestrator | 2026-02-18 04:53:48.833545 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-02-18 04:53:48.833549 | orchestrator | Wednesday 18 February 2026 04:53:38 +0000 (0:00:00.314) 0:00:04.967 **** 2026-02-18 04:53:48.833554 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:48.833558 | orchestrator | skipping: [testbed-node-1] 2026-02-18 04:53:48.833563 | orchestrator | skipping: [testbed-node-2] 2026-02-18 04:53:48.833567 | orchestrator | 2026-02-18 04:53:48.833572 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-02-18 04:53:48.833576 | orchestrator | Wednesday 18 February 2026 04:53:38 +0000 (0:00:00.278) 0:00:05.246 **** 2026-02-18 04:53:48.833581 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:48.833586 | orchestrator | ok: [testbed-node-1] 2026-02-18 04:53:48.833590 | orchestrator | ok: [testbed-node-2] 2026-02-18 04:53:48.833595 | orchestrator | 2026-02-18 04:53:48.833599 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-18 04:53:48.833604 | orchestrator | Wednesday 18 February 2026 04:53:39 +0000 (0:00:00.593) 0:00:05.839 **** 2026-02-18 04:53:48.833608 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:48.833613 | orchestrator | 2026-02-18 04:53:48.833617 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-18 04:53:48.833622 | orchestrator | Wednesday 18 February 2026 04:53:39 +0000 (0:00:00.283) 0:00:06.123 **** 2026-02-18 04:53:48.833628 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:48.833635 | orchestrator | 2026-02-18 04:53:48.833643 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-18 04:53:48.833650 | orchestrator | Wednesday 18 February 2026 04:53:39 +0000 (0:00:00.252) 0:00:06.375 **** 2026-02-18 04:53:48.833658 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:48.833665 | orchestrator | 2026-02-18 04:53:48.833672 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-02-18 04:53:48.833680 | orchestrator | Wednesday 18 February 2026 04:53:40 +0000 (0:00:00.254) 0:00:06.630 **** 2026-02-18 04:53:48.833687 | orchestrator | 2026-02-18 04:53:48.833694 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-18 04:53:48.833701 | orchestrator | Wednesday 18 February 2026 04:53:40 +0000 (0:00:00.073) 0:00:06.704 **** 2026-02-18 04:53:48.833709 | orchestrator | 2026-02-18 04:53:48.833717 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-18 04:53:48.833724 | orchestrator | Wednesday 18 February 2026 04:53:40 +0000 (0:00:00.076) 0:00:06.780 **** 2026-02-18 04:53:48.833740 | orchestrator | 2026-02-18 04:53:48.833747 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-18 04:53:48.833752 | orchestrator | Wednesday 18 February 2026 04:53:40 +0000 (0:00:00.079) 0:00:06.860 **** 2026-02-18 04:53:48.833760 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:48.833767 | orchestrator | 2026-02-18 04:53:48.833776 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-18 04:53:48.833783 | orchestrator | Wednesday 18 February 2026 04:53:40 +0000 (0:00:00.251) 0:00:07.111 **** 2026-02-18 04:53:48.833791 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:48.833798 | orchestrator | 2026-02-18 04:53:48.833826 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-02-18 04:53:48.833834 | orchestrator | Wednesday 18 February 2026 04:53:40 +0000 (0:00:00.254) 0:00:07.365 **** 2026-02-18 04:53:48.833842 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:48.833851 | orchestrator | 2026-02-18 04:53:48.833856 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-02-18 04:53:48.833861 | orchestrator | Wednesday 18 February 2026 04:53:41 +0000 (0:00:00.129) 0:00:07.495 **** 2026-02-18 04:53:48.833866 | orchestrator | changed: [testbed-node-0] 2026-02-18 04:53:48.833872 | orchestrator | 2026-02-18 04:53:48.833877 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-02-18 04:53:48.833883 | orchestrator | Wednesday 18 February 2026 04:53:42 +0000 (0:00:01.873) 0:00:09.369 **** 2026-02-18 04:53:48.833888 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:48.833893 | orchestrator | 2026-02-18 04:53:48.833915 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-02-18 04:53:48.833920 | orchestrator | Wednesday 18 February 2026 04:53:43 +0000 (0:00:00.496) 0:00:09.865 **** 2026-02-18 04:53:48.833925 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:48.833931 | orchestrator | 2026-02-18 04:53:48.833936 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-02-18 04:53:48.833941 | orchestrator | Wednesday 18 February 2026 04:53:43 +0000 (0:00:00.315) 0:00:10.181 **** 2026-02-18 04:53:48.833949 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:48.833956 | orchestrator | 2026-02-18 04:53:48.833964 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-02-18 04:53:48.833971 | orchestrator | Wednesday 18 February 2026 04:53:43 +0000 (0:00:00.153) 0:00:10.334 **** 2026-02-18 04:53:48.833979 | orchestrator | ok: [testbed-node-0] 2026-02-18 04:53:48.833986 | orchestrator | 2026-02-18 04:53:48.833995 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-18 04:53:48.834003 | orchestrator | Wednesday 18 February 2026 04:53:44 +0000 (0:00:00.166) 0:00:10.500 **** 2026-02-18 04:53:48.834011 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 
04:53:48.834080 | orchestrator | 2026-02-18 04:53:48.834089 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-18 04:53:48.834097 | orchestrator | Wednesday 18 February 2026 04:53:44 +0000 (0:00:00.264) 0:00:10.764 **** 2026-02-18 04:53:48.834105 | orchestrator | skipping: [testbed-node-0] 2026-02-18 04:53:48.834112 | orchestrator | 2026-02-18 04:53:48.834119 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-18 04:53:48.834126 | orchestrator | Wednesday 18 February 2026 04:53:44 +0000 (0:00:00.276) 0:00:11.041 **** 2026-02-18 04:53:48.834134 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 04:53:48.834142 | orchestrator | 2026-02-18 04:53:48.834150 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-18 04:53:48.834157 | orchestrator | Wednesday 18 February 2026 04:53:46 +0000 (0:00:01.364) 0:00:12.406 **** 2026-02-18 04:53:48.834164 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 04:53:48.834172 | orchestrator | 2026-02-18 04:53:48.834178 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-18 04:53:48.834184 | orchestrator | Wednesday 18 February 2026 04:53:46 +0000 (0:00:00.297) 0:00:12.703 **** 2026-02-18 04:53:48.834198 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 04:53:48.834206 | orchestrator | 2026-02-18 04:53:48.834212 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-18 04:53:48.834219 | orchestrator | Wednesday 18 February 2026 04:53:46 +0000 (0:00:00.265) 0:00:12.968 **** 2026-02-18 04:53:48.834226 | orchestrator | 2026-02-18 04:53:48.834233 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-18 04:53:48.834241 | orchestrator 
| Wednesday 18 February 2026 04:53:46 +0000 (0:00:00.072) 0:00:13.041 **** 2026-02-18 04:53:48.834248 | orchestrator | 2026-02-18 04:53:48.834256 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-18 04:53:48.834263 | orchestrator | Wednesday 18 February 2026 04:53:46 +0000 (0:00:00.071) 0:00:13.112 **** 2026-02-18 04:53:48.834271 | orchestrator | 2026-02-18 04:53:48.834278 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-18 04:53:48.834286 | orchestrator | Wednesday 18 February 2026 04:53:47 +0000 (0:00:00.297) 0:00:13.410 **** 2026-02-18 04:53:48.834293 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-18 04:53:48.834301 | orchestrator | 2026-02-18 04:53:48.834308 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-18 04:53:48.834316 | orchestrator | Wednesday 18 February 2026 04:53:48 +0000 (0:00:01.348) 0:00:14.759 **** 2026-02-18 04:53:48.834323 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-18 04:53:48.834331 | orchestrator |  "msg": [ 2026-02-18 04:53:48.834339 | orchestrator |  "Validator run completed.", 2026-02-18 04:53:48.834351 | orchestrator |  "You can find the report file here:", 2026-02-18 04:53:48.834359 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-18T04:53:34+00:00-report.json", 2026-02-18 04:53:48.834368 | orchestrator |  "on the following host:", 2026-02-18 04:53:48.834375 | orchestrator |  "testbed-manager" 2026-02-18 04:53:48.834383 | orchestrator |  ] 2026-02-18 04:53:48.834391 | orchestrator | } 2026-02-18 04:53:48.834398 | orchestrator | 2026-02-18 04:53:48.834406 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 04:53:48.834415 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-02-18 04:53:48.834424 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 04:53:48.834439 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 04:53:49.268029 | orchestrator | 2026-02-18 04:53:49.268195 | orchestrator | 2026-02-18 04:53:49.268210 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 04:53:49.268223 | orchestrator | Wednesday 18 February 2026 04:53:48 +0000 (0:00:00.430) 0:00:15.190 **** 2026-02-18 04:53:49.268232 | orchestrator | =============================================================================== 2026-02-18 04:53:49.268242 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.87s 2026-02-18 04:53:49.268252 | orchestrator | Aggregate test results step one ----------------------------------------- 1.36s 2026-02-18 04:53:49.268262 | orchestrator | Write report file ------------------------------------------------------- 1.35s 2026-02-18 04:53:49.268271 | orchestrator | Create report output directory ------------------------------------------ 1.06s 2026-02-18 04:53:49.268281 | orchestrator | Get container info ------------------------------------------------------ 1.02s 2026-02-18 04:53:49.268290 | orchestrator | Get timestamp for report file ------------------------------------------- 0.89s 2026-02-18 04:53:49.268300 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.59s 2026-02-18 04:53:49.268310 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s 2026-02-18 04:53:49.268344 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.50s 2026-02-18 04:53:49.268354 | orchestrator | Flush handlers ---------------------------------------------------------- 0.44s 2026-02-18 04:53:49.268363 | 
orchestrator | Print report file information ------------------------------------------- 0.43s 2026-02-18 04:53:49.268373 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2026-02-18 04:53:49.268383 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2026-02-18 04:53:49.268392 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2026-02-18 04:53:49.268402 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s 2026-02-18 04:53:49.268411 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-02-18 04:53:49.268421 | orchestrator | Aggregate test results step one ----------------------------------------- 0.28s 2026-02-18 04:53:49.268431 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s 2026-02-18 04:53:49.268440 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s 2026-02-18 04:53:49.268450 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2026-02-18 04:53:49.663608 | orchestrator | + osism validate ceph-osds 2026-02-18 04:54:11.494674 | orchestrator | 2026-02-18 04:54:11.494790 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-02-18 04:54:11.494807 | orchestrator | 2026-02-18 04:54:11.494819 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-18 04:54:11.494830 | orchestrator | Wednesday 18 February 2026 04:54:06 +0000 (0:00:00.442) 0:00:00.442 **** 2026-02-18 04:54:11.494841 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-18 04:54:11.494853 | orchestrator | 2026-02-18 04:54:11.494863 | orchestrator | TASK [Get extra vars for Ceph configuration] 
***********************************
2026-02-18 04:54:11.494874 | orchestrator | Wednesday 18 February 2026 04:54:07 +0000 (0:00:00.926) 0:00:01.368 ****
2026-02-18 04:54:11.494885 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-18 04:54:11.494896 | orchestrator |
2026-02-18 04:54:11.494907 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-18 04:54:11.494918 | orchestrator | Wednesday 18 February 2026 04:54:08 +0000 (0:00:00.598) 0:00:01.967 ****
2026-02-18 04:54:11.494928 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-18 04:54:11.494939 | orchestrator |
2026-02-18 04:54:11.494950 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-18 04:54:11.494960 | orchestrator | Wednesday 18 February 2026 04:54:08 +0000 (0:00:00.759) 0:00:02.726 ****
2026-02-18 04:54:11.494971 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:11.494984 | orchestrator |
2026-02-18 04:54:11.494995 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-18 04:54:11.495006 | orchestrator | Wednesday 18 February 2026 04:54:08 +0000 (0:00:00.135) 0:00:02.861 ****
2026-02-18 04:54:11.495017 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:11.495083 | orchestrator |
2026-02-18 04:54:11.495094 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-18 04:54:11.495105 | orchestrator | Wednesday 18 February 2026 04:54:09 +0000 (0:00:00.178) 0:00:03.040 ****
2026-02-18 04:54:11.495116 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:11.495126 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:54:11.495137 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:54:11.495147 | orchestrator |
2026-02-18 04:54:11.495174 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-18 04:54:11.495185 | orchestrator | Wednesday 18 February 2026 04:54:09 +0000 (0:00:00.314) 0:00:03.355 ****
2026-02-18 04:54:11.495196 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:11.495206 | orchestrator |
2026-02-18 04:54:11.495218 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-18 04:54:11.495252 | orchestrator | Wednesday 18 February 2026 04:54:09 +0000 (0:00:00.195) 0:00:03.551 ****
2026-02-18 04:54:11.495267 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:11.495278 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:11.495290 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:11.495303 | orchestrator |
2026-02-18 04:54:11.495315 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-02-18 04:54:11.495328 | orchestrator | Wednesday 18 February 2026 04:54:09 +0000 (0:00:00.349) 0:00:03.901 ****
2026-02-18 04:54:11.495340 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:11.495352 | orchestrator |
2026-02-18 04:54:11.495364 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-18 04:54:11.495376 | orchestrator | Wednesday 18 February 2026 04:54:10 +0000 (0:00:00.945) 0:00:04.846 ****
2026-02-18 04:54:11.495388 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:11.495400 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:11.495412 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:11.495424 | orchestrator |
2026-02-18 04:54:11.495436 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-02-18 04:54:11.495448 | orchestrator | Wednesday 18 February 2026 04:54:11 +0000 (0:00:00.332) 0:00:05.179 ****
2026-02-18 04:54:11.495463 | orchestrator | skipping: [testbed-node-3] => (item={'id': '297289b97e72dca7418391f3c374dd920ff892847f2b6e6e03c2acdd30f2b4ef', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-18 04:54:11.495479 | orchestrator | skipping: [testbed-node-3] => (item={'id': '81d8b80acbc1b7dc4f543b1d421d8a17cb1fd41f9bb3614ea50867387660ea72', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-18 04:54:11.495494 | orchestrator | skipping: [testbed-node-3] => (item={'id': '47baf0be8a00ec4f7f8f8f8d0cebe7e21b8bc080e0d0241bfca6b0375cfa00fd', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-02-18 04:54:11.495507 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f8bb13875b3926f14f3391c7f88420436ce8e890eb74331c69a1def81fdc5387', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-18 04:54:11.495520 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'abe70932d1a3b8c8fe48b9f0a6bb14b97160e8aa1fad6c746cf7b51b620d2038', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-18 04:54:11.495556 | orchestrator | skipping: [testbed-node-3] => (item={'id': '507fbcb2909b20d1883d6f6b8f04f3a288a5a4172c7ed05b7c9e17a4f564c0b7', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-18 04:54:11.495570 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0851dd0f79996ca0e8a6bd2d61b18f4a916f19d9f6c1d0d2c52fd20438ed9b96', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-18 04:54:11.495583 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c3250353bf564fe85997bbd56dc0f59c72eead102f107ba9bf717f379d140c5b', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-02-18 04:54:11.495594 | orchestrator | skipping: [testbed-node-3] => (item={'id': '71b37714ab2e3a444bde733f6daef75e2735703de414e59f275a4e053a98c9fc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.495615 | orchestrator | skipping: [testbed-node-3] => (item={'id': '50c74165a05aadafecff838b0c09d432ddb4fad793e5f38e73bfdb9d4aee04ef', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.495627 | orchestrator | skipping: [testbed-node-3] => (item={'id': '97a3240af47bd6ea97f11b36b1627fa200461a5bbbc09185fc8171d550f9262e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.495639 | orchestrator | ok: [testbed-node-3] => (item={'id': '9b110ed068fa4e7600234742ed3a53c89b38b1dc4e3465898a87210f93cad41d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.495650 | orchestrator | ok: [testbed-node-3] => (item={'id': 'e604e622a4e6c99b9146e1565ad9b77a48ac44dcd68e4615fd2cad220ad615cb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.495662 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4fb494c3f20bc439ba3c701147ff9b95d7b7aad08b5eb674206b2b4f545c93cc', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.495673 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f891b35667060d247ff6ff7de7f706d4bb9ddd15c135579ac0b76e85723b6717', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-18 04:54:11.495684 | orchestrator | skipping: [testbed-node-3] => (item={'id': '69949afd6279346619cdaf9b37a83251337796cc927b75affe400013aab2d218', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-18 04:54:11.495695 | orchestrator | skipping: [testbed-node-3] => (item={'id': '83cbca84e1029b3938fda18ac9a5228eaad5b4da8bea3f119e3016fe84368c0e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-18 04:54:11.495706 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6ad9c4166fc21136d436ec6129ed83d0b3e0aa4ba791f9923dd455fa6033590c', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-18 04:54:11.495717 | orchestrator | skipping: [testbed-node-3] => (item={'id': '423265fc216065e41eb886f68cd31c7521205ed1852684dd376cca1f6cc2f63f', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-18 04:54:11.495729 | orchestrator | skipping: [testbed-node-4] => (item={'id': '33e3ff74d411b498f31280b26abb2f4dabb7a80570f086424edf42d05b64e430', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-18 04:54:11.495746 | orchestrator | skipping: [testbed-node-4] => (item={'id': '45dba07fdb9f18199eb99b5995d999a35b1383a495c6107d609e24f1f365e5d3', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-18 04:54:11.767661 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd09b2ca72efbd48bcc965ddda83549e651711cb0e258455bd5e1ee1859f22149', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-02-18 04:54:11.767800 | orchestrator | skipping: [testbed-node-4] => (item={'id': '11421a9e9fe1e026f9359193b03fe4095ea9cee209a943a1b46b3327e9834a45', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-18 04:54:11.767840 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd0bbb434b706d49852e850d7958c99c19493dc7bf01c7a8abd54c7476beb3408', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-18 04:54:11.767854 | orchestrator | skipping: [testbed-node-4] => (item={'id': '586c0a5b5dd18953735fef4b7768d9654eaa64eb25b9fc3b3563be91d6da7c4d', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-18 04:54:11.767871 | orchestrator | skipping: [testbed-node-4] => (item={'id': '20d9a0124964f85d65e8903087ca45d7b3cc9691dbf810c9a006ba947cb95428', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-18 04:54:11.767882 | orchestrator | skipping: [testbed-node-4] => (item={'id': '60d0971184b3529bcead0294154c57ead195cceb4872b0efc7b139ce0a9f8311', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-02-18 04:54:11.767894 | orchestrator | skipping: [testbed-node-4] => (item={'id': '88fb623366d549564e049193caf54b46e5cb5ed00e3f7d6bbbcf00de825b876a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.767906 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ab19906d5b86f3eab4dfd761c4ce75fd19d538ca4c14c1fd2dee9bfb27e051c2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.767917 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'afebdc8ca309be97f3fd740717a553e8d7b44cf6ca3c947cb6964e30de5b5477', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.767930 | orchestrator | ok: [testbed-node-4] => (item={'id': 'd08b2b07bd3f158d7f17e7291216c0b79f9f434cfa322a784a3fe986c8eacd2d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.767941 | orchestrator | ok: [testbed-node-4] => (item={'id': 'aa1d98eff27a2d616df79feb977cb52b56f020ca751abb817a89fb9a1bcdb538', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.767953 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e2cdac939030119d57354f0cc10d74bf41ef1568151b90d1b3e219f6abc89bd4', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.767964 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'af175a7ffbc4c3ee265bc6e2c26a32b6390a8b46046ef2fb419675e3977650bf', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-18 04:54:11.767975 | orchestrator | skipping: [testbed-node-4] => (item={'id': '55013bda410c31b9237cf634f8b29ebd481bf90069fce0ef06c339d44f02519d', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-18 04:54:11.768003 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1b139a006660301016d4171cdfb2e9a8d1c9971aa559e26a77f88ec4c87a6840', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-18 04:54:11.768100 | orchestrator | skipping: [testbed-node-4] => (item={'id': '73b4c41b1be9dd9182c1dafd1f7b08e34ab0fb7026d5e0886f6c282bab2a42cc', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-18 04:54:11.768117 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e89b541847a5bbb3dd48b30723b0affdcea5e21a4f995e4d2d04f45f4e2593e6', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-18 04:54:11.768129 | orchestrator | skipping: [testbed-node-5] => (item={'id': '09b0060500eb1f77f2ac9f9e5be155a4cdb36c351b9c5ecb30814e8aa235217f', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-18 04:54:11.768140 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5ac8ac269c38dea1d7c15f736f5b2dc0f0eb81118571c211230bf34e72e12990', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-18 04:54:11.768157 | orchestrator | skipping: [testbed-node-5] => (item={'id': '61191452b50def83ce25b64fb3c02059b44d4349682f86c6f1fb7df393b0d7b6', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-02-18 04:54:11.768168 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7b6f3f7007052a019ed0257a992d31f41b2991fb30efc15a069fcc256cba1074', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-18 04:54:11.768179 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3988dd9bece63b41cb1e8e960040b5218afaafbee47c11901f448b6e2c781da2', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-18 04:54:11.768190 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8544dd22a915d307923e914eb3d42872414cb68ba7c7b5ca4d62728e3d0a6b72', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-18 04:54:11.768202 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6adde1577c701b9406709380a1c713d9b2097779db1d47073dc37015d2143982', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-18 04:54:11.768216 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1fea2b09e41551915fab1d77f74939643659ebf9add6427489e669ebe3fa595a', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-02-18 04:54:11.768229 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7f4e6c2b5979a50752ec2d0a4746297af68cb8f6fa77b8b6a2a0b8d878068cda', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.768242 | orchestrator | skipping: [testbed-node-5] => (item={'id': '04a4f36205c7836cc501d4ff22fb9aa53f9df5605ac90adf8c75bdd13b790c18', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.768254 | orchestrator | skipping: [testbed-node-5] => (item={'id': '56bbccf0f62dd9befc50ab9ae402e768962ef44cdfeb743910b963794814d8ed', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.768274 | orchestrator | ok: [testbed-node-5] => (item={'id': 'c92f2e610ba0a0f4b2e2903f66d1a77657b1a3178a8b030a32acf502289c98ec', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:11.768296 | orchestrator | ok: [testbed-node-5] => (item={'id': '199183dc3d6620e197fec1251f67b8c75de21795903bd0b9900f85ae4ba1e9b5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:23.767247 | orchestrator | skipping: [testbed-node-5] => (item={'id': '55cea7f90a68bf742cb5c86b4a211e9fdb118a7ba3c623de1a2b9b68a739bb13', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-18 04:54:23.767400 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5de3b49c05ab8f38855c31c31a4ef98a7c02e25023498240f052aedb4ed95325', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-18 04:54:23.767429 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ad15952a8d750a2451cd8a26617948f3dc5ddefa3fc594c32761fa9bd268553a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-18 04:54:23.767452 | orchestrator | skipping: [testbed-node-5] => (item={'id': '107264d1f2e197aaa4c8ffe50d1c8c250982557f290546d2a1ba5a8c5149c4ee', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-18 04:54:23.767492 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ae40935c3e0d70e28237fbb301bf0e7a918b48ba51cf04a15ccfc7d828cee59d', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-18 04:54:23.767511 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4ab213ef9e654af8ca0b4345b35ea6c2a673c795c7aa0ecc30aa86341fac286d', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-18 04:54:23.767531 | orchestrator |
2026-02-18 04:54:23.767551 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-02-18 04:54:23.767571 | orchestrator | Wednesday 18 February 2026 04:54:11 +0000 (0:00:00.498) 0:00:05.677 ****
2026-02-18 04:54:23.767589 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:23.767609 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:23.767626 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:23.767644 | orchestrator |
2026-02-18 04:54:23.767664 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-02-18 04:54:23.767683 | orchestrator | Wednesday 18 February 2026 04:54:12 +0000 (0:00:00.331) 0:00:06.009 ****
2026-02-18 04:54:23.767704 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:23.767724 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:54:23.767745 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:54:23.767767 | orchestrator |
2026-02-18 04:54:23.767788 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-02-18 04:54:23.767810 | orchestrator | Wednesday 18 February 2026 04:54:12 +0000 (0:00:00.552) 0:00:06.562 ****
2026-02-18 04:54:23.767833 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:23.767856 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:23.767877 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:23.767900 | orchestrator |
2026-02-18 04:54:23.767920 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-18 04:54:23.767940 | orchestrator | Wednesday 18 February 2026 04:54:12 +0000 (0:00:00.334) 0:00:06.897 ****
2026-02-18 04:54:23.767960 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:23.767980 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:23.768070 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:23.768094 | orchestrator |
2026-02-18 04:54:23.768116 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-02-18 04:54:23.768138 | orchestrator | Wednesday 18 February 2026 04:54:13 +0000 (0:00:00.317) 0:00:07.214 ****
2026-02-18 04:54:23.768158 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-02-18 04:54:23.768180 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-02-18 04:54:23.768199 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:23.768219 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-02-18 04:54:23.768239 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-02-18 04:54:23.768258 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:54:23.768278 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-02-18 04:54:23.768296 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-02-18 04:54:23.768316 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:54:23.768335 | orchestrator |
2026-02-18 04:54:23.768353 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-02-18 04:54:23.768372 | orchestrator | Wednesday 18 February 2026 04:54:13 +0000 (0:00:00.377) 0:00:07.592 ****
2026-02-18 04:54:23.768391 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:23.768411 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:23.768430 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:23.768450 | orchestrator |
2026-02-18 04:54:23.768470 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-02-18 04:54:23.768489 | orchestrator | Wednesday 18 February 2026 04:54:14 +0000 (0:00:00.538) 0:00:08.130 ****
2026-02-18 04:54:23.768509 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:23.768564 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:54:23.768588 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:54:23.768610 | orchestrator |
2026-02-18 04:54:23.768631 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-02-18 04:54:23.768652 | orchestrator | Wednesday 18 February 2026 04:54:14 +0000 (0:00:00.338) 0:00:08.469 ****
2026-02-18 04:54:23.768673 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:23.768693 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:54:23.768712 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:54:23.768732 | orchestrator |
2026-02-18 04:54:23.768752 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-02-18 04:54:23.768772 | orchestrator | Wednesday 18 February 2026 04:54:14 +0000 (0:00:00.313) 0:00:08.782 ****
2026-02-18 04:54:23.768792 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:23.768811 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:23.768825 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:23.768836 | orchestrator |
2026-02-18 04:54:23.768847 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-18 04:54:23.768858 | orchestrator | Wednesday 18 February 2026 04:54:15 +0000 (0:00:00.392) 0:00:09.175 ****
2026-02-18 04:54:23.768869 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:23.768880 | orchestrator |
2026-02-18 04:54:23.768890 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-18 04:54:23.768901 | orchestrator | Wednesday 18 February 2026 04:54:16 +0000 (0:00:00.789) 0:00:09.965 ****
2026-02-18 04:54:23.768912 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:23.768923 | orchestrator |
2026-02-18 04:54:23.768933 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-18 04:54:23.768945 | orchestrator | Wednesday 18 February 2026 04:54:16 +0000 (0:00:00.330) 0:00:10.295 ****
2026-02-18 04:54:23.768955 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:23.768966 | orchestrator |
2026-02-18 04:54:23.768977 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-18 04:54:23.769004 | orchestrator | Wednesday 18 February 2026 04:54:16 +0000 (0:00:00.273) 0:00:10.568 ****
2026-02-18 04:54:23.769067 | orchestrator |
2026-02-18 04:54:23.769088 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-18 04:54:23.769106 | orchestrator | Wednesday 18 February 2026 04:54:16 +0000 (0:00:00.070) 0:00:10.639 ****
2026-02-18 04:54:23.769123 | orchestrator |
2026-02-18 04:54:23.769134 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-18 04:54:23.769145 | orchestrator | Wednesday 18 February 2026 04:54:16 +0000 (0:00:00.076) 0:00:10.715 ****
2026-02-18 04:54:23.769156 | orchestrator |
2026-02-18 04:54:23.769166 | orchestrator | TASK [Print report file information] *******************************************
2026-02-18 04:54:23.769178 | orchestrator | Wednesday 18 February 2026 04:54:16 +0000 (0:00:00.078) 0:00:10.794 ****
2026-02-18 04:54:23.769188 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:23.769199 | orchestrator |
2026-02-18 04:54:23.769209 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-02-18 04:54:23.769220 | orchestrator | Wednesday 18 February 2026 04:54:17 +0000 (0:00:00.275) 0:00:11.069 ****
2026-02-18 04:54:23.769231 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:23.769242 | orchestrator |
2026-02-18 04:54:23.769252 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-18 04:54:23.769263 | orchestrator | Wednesday 18 February 2026 04:54:17 +0000 (0:00:00.276) 0:00:11.345 ****
2026-02-18 04:54:23.769273 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:23.769284 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:23.769294 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:23.769305 | orchestrator |
2026-02-18 04:54:23.769316 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-02-18 04:54:23.769327 | orchestrator | Wednesday 18 February 2026 04:54:17 +0000 (0:00:00.328) 0:00:11.674 ****
2026-02-18 04:54:23.769337 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:23.769348 | orchestrator |
2026-02-18 04:54:23.769358 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-02-18 04:54:23.769369 | orchestrator | Wednesday 18 February 2026 04:54:18 +0000 (0:00:00.772) 0:00:12.446 ****
2026-02-18 04:54:23.769380 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-18 04:54:23.769391 | orchestrator |
2026-02-18 04:54:23.769401 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-02-18 04:54:23.769412 | orchestrator | Wednesday 18 February 2026 04:54:20 +0000 (0:00:01.581) 0:00:14.028 ****
2026-02-18 04:54:23.769423 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:23.769433 | orchestrator |
2026-02-18 04:54:23.769444 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-02-18 04:54:23.769455 | orchestrator | Wednesday 18 February 2026 04:54:20 +0000 (0:00:00.159) 0:00:14.187 ****
2026-02-18 04:54:23.769465 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:23.769476 | orchestrator |
2026-02-18 04:54:23.769487 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-02-18 04:54:23.769497 | orchestrator | Wednesday 18 February 2026 04:54:20 +0000 (0:00:00.332) 0:00:14.520 ****
2026-02-18 04:54:23.769508 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:23.769519 | orchestrator |
2026-02-18 04:54:23.769529 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-02-18 04:54:23.769540 | orchestrator | Wednesday 18 February 2026 04:54:20 +0000 (0:00:00.128) 0:00:14.648 ****
2026-02-18 04:54:23.769551 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:23.769561 | orchestrator |
2026-02-18 04:54:23.769572 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-18 04:54:23.769583 | orchestrator | Wednesday 18 February 2026 04:54:20 +0000 (0:00:00.166) 0:00:14.814 ****
2026-02-18 04:54:23.769592 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:23.769602 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:23.769611 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:23.769630 | orchestrator |
2026-02-18 04:54:23.769640 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-02-18 04:54:23.769649 | orchestrator | Wednesday 18 February 2026 04:54:21 +0000 (0:00:00.367) 0:00:15.182 ****
2026-02-18 04:54:23.769659 | orchestrator | changed: [testbed-node-3]
2026-02-18 04:54:23.769668 | orchestrator | changed: [testbed-node-4]
2026-02-18 04:54:23.769678 | orchestrator | changed: [testbed-node-5]
2026-02-18 04:54:34.728806 | orchestrator |
2026-02-18 04:54:34.728983 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-02-18 04:54:34.729002 | orchestrator | Wednesday 18 February 2026 04:54:23 +0000 (0:00:02.521) 0:00:17.703 ****
2026-02-18 04:54:34.729078 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:34.729092 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:34.729103 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:34.729114 | orchestrator |
2026-02-18 04:54:34.729126 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-02-18 04:54:34.729138 | orchestrator | Wednesday 18 February 2026 04:54:24 +0000 (0:00:00.381) 0:00:18.085 ****
2026-02-18 04:54:34.729149 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:34.729160 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:34.729172 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:34.729183 | orchestrator |
2026-02-18 04:54:34.729194 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-02-18 04:54:34.729206 | orchestrator | Wednesday 18 February 2026 04:54:24 +0000 (0:00:00.564) 0:00:18.649 ****
2026-02-18 04:54:34.729217 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:34.729229 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:54:34.729240 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:54:34.729251 | orchestrator |
2026-02-18 04:54:34.729262 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-02-18 04:54:34.729274 | orchestrator | Wednesday 18 February 2026 04:54:25 +0000 (0:00:00.401) 0:00:19.050 ****
2026-02-18 04:54:34.729285 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:34.729296 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:34.729309 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:34.729321 | orchestrator |
2026-02-18 04:54:34.729334 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-02-18 04:54:34.729353 | orchestrator | Wednesday 18 February 2026 04:54:25 +0000 (0:00:00.642) 0:00:19.693 ****
2026-02-18 04:54:34.729365 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:34.729378 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:54:34.729390 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:54:34.729403 | orchestrator |
2026-02-18 04:54:34.729416 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-02-18 04:54:34.729430 | orchestrator | Wednesday 18 February 2026 04:54:26 +0000 (0:00:00.342) 0:00:20.036 ****
2026-02-18 04:54:34.729443 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:34.729455 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:54:34.729467 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:54:34.729479 | orchestrator |
2026-02-18 04:54:34.729492 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-18 04:54:34.729504 | orchestrator | Wednesday 18 February 2026 04:54:26 +0000 (0:00:00.347) 0:00:20.383 ****
2026-02-18 04:54:34.729517 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:34.729529 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:34.729542 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:34.729554 | orchestrator |
2026-02-18 04:54:34.729566 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-02-18 04:54:34.729579 | orchestrator | Wednesday 18 February 2026 04:54:26 +0000 (0:00:00.532) 0:00:20.916 ****
2026-02-18 04:54:34.729591 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:34.729603 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:34.729615 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:34.729627 | orchestrator |
2026-02-18 04:54:34.729640 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-02-18 04:54:34.729678 | orchestrator | Wednesday 18 February 2026 04:54:27 +0000 (0:00:00.823) 0:00:21.739 ****
2026-02-18 04:54:34.729690 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:34.729701 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:34.729712 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:34.729723 | orchestrator |
2026-02-18 04:54:34.729734 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-02-18 04:54:34.729745 | orchestrator | Wednesday 18 February 2026 04:54:28 +0000 (0:00:00.330) 0:00:22.070 ****
2026-02-18 04:54:34.729755 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:34.729767 | orchestrator | skipping: [testbed-node-4]
2026-02-18 04:54:34.729777 | orchestrator | skipping: [testbed-node-5]
2026-02-18 04:54:34.729788 | orchestrator |
2026-02-18 04:54:34.729799 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-02-18 04:54:34.729810 | orchestrator | Wednesday 18 February 2026 04:54:28 +0000 (0:00:00.306) 0:00:22.376 ****
2026-02-18 04:54:34.729821 | orchestrator | ok: [testbed-node-3]
2026-02-18 04:54:34.729832 | orchestrator | ok: [testbed-node-4]
2026-02-18 04:54:34.729843 | orchestrator | ok: [testbed-node-5]
2026-02-18 04:54:34.729853 | orchestrator |
2026-02-18 04:54:34.729864 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-18 04:54:34.729875 | orchestrator | Wednesday 18 February 2026 04:54:29 +0000 (0:00:00.600) 0:00:22.976 ****
2026-02-18 04:54:34.729886 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-18 04:54:34.729897 | orchestrator |
2026-02-18 04:54:34.729908 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-18 04:54:34.729919 | orchestrator | Wednesday 18 February 2026 04:54:29 +0000 (0:00:00.305) 0:00:23.282 ****
2026-02-18 04:54:34.729929 | orchestrator | skipping: [testbed-node-3]
2026-02-18 04:54:34.729940 | orchestrator |
2026-02-18 04:54:34.729951 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-18 04:54:34.729962 | orchestrator | Wednesday 18 February 2026 04:54:29 +0000 (0:00:00.259) 0:00:23.541 ****
2026-02-18 04:54:34.729973 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-18 04:54:34.729983 | orchestrator |
2026-02-18 04:54:34.729994 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-18 04:54:34.730095 | orchestrator | Wednesday 18 February 2026 04:54:31 +0000 (0:00:01.756) 0:00:25.297 ****
2026-02-18 04:54:34.730111 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-18 04:54:34.730122 | orchestrator |
2026-02-18 04:54:34.730134 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-18 04:54:34.730154 | orchestrator | Wednesday 18 February 2026 04:54:31 +0000 (0:00:00.291) 0:00:25.589 ****
2026-02-18 04:54:34.730166 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-18 04:54:34.730176 | orchestrator |
2026-02-18 04:54:34.730211 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-18 04:54:34.730223 | orchestrator | Wednesday 18 February 2026 04:54:31 +0000 (0:00:00.301) 0:00:25.891 ****
2026-02-18 04:54:34.730234 | orchestrator |
2026-02-18 04:54:34.730245 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-18 04:54:34.730255 | orchestrator | Wednesday 18 February 2026 04:54:32 +0000 (0:00:00.075) 0:00:25.966 ****
2026-02-18 04:54:34.730266 | orchestrator |
2026-02-18 04:54:34.730277 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-18 04:54:34.730288 | orchestrator | Wednesday 18 February 2026 04:54:32 +0000 (0:00:00.076) 0:00:26.042 ****
2026-02-18 04:54:34.730299 | orchestrator |
2026-02-18 04:54:34.730310 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-18 04:54:34.730321 | orchestrator | Wednesday 18 February 2026 04:54:32 +0000 (0:00:00.077) 0:00:26.120 ****
2026-02-18 04:54:34.730331 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-18 04:54:34.730342 | orchestrator |
2026-02-18 04:54:34.730353 | orchestrator | TASK [Print report file information] *******************************************
2026-02-18 04:54:34.730375 | orchestrator | Wednesday 18 February 2026 04:54:33 +0000 (0:00:01.576) 0:00:27.696 ****
2026-02-18 04:54:34.730386 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-02-18 04:54:34.730397 | orchestrator |  "msg": [
2026-02-18 04:54:34.730408 | orchestrator |  "Validator run completed.",
2026-02-18 04:54:34.730419 | orchestrator |  "You can find the report file here:",
2026-02-18 04:54:34.730430 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-18T04:54:07+00:00-report.json",
2026-02-18 04:54:34.730449 | orchestrator |  "on the following host:",
2026-02-18 04:54:34.730460 | orchestrator |  "testbed-manager"
2026-02-18 04:54:34.730471 | orchestrator |  ]
2026-02-18 04:54:34.730483 | orchestrator | }
2026-02-18 04:54:34.730494 | orchestrator |
2026-02-18 04:54:34.730505 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 04:54:34.730518 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-18 04:54:34.730531 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-18 04:54:34.730542 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-18 04:54:34.730553 | orchestrator |
2026-02-18 04:54:34.730564 | orchestrator |
2026-02-18 04:54:34.730575 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 04:54:34.730586 | orchestrator | Wednesday 18 February 2026 04:54:34 +0000 (0:00:00.638) 0:00:28.335 ****
2026-02-18 04:54:34.730597 | orchestrator | ===============================================================================
2026-02-18 04:54:34.730608 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.52s
2026-02-18 04:54:34.730618 | orchestrator | Aggregate test results step one ----------------------------------------- 1.76s
2026-02-18 04:54:34.730629 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.58s
2026-02-18 04:54:34.730640 | orchestrator | Write report file ------------------------------------------------------- 1.58s
2026-02-18 04:54:34.730651 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.95s
2026-02-18 04:54:34.730662 | orchestrator | Get timestamp for report file ------------------------------------------- 0.93s
2026-02-18 04:54:34.730673 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.82s
2026-02-18 04:54:34.730683 | orchestrator | Aggregate test results step one ----------------------------------------- 0.79s
2026-02-18 04:54:34.730694 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.77s
2026-02-18 04:54:34.730705 | orchestrator | Create report output directory ------------------------------------------ 0.76s
2026-02-18 04:54:34.730716 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.64s
2026-02-18 04:54:34.730727 | orchestrator | Print report file information ------------------------------------------- 0.64s
2026-02-18 04:54:34.730737 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.60s
2026-02-18 04:54:34.730748 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.60s
2026-02-18 04:54:34.730759 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.56s
2026-02-18 04:54:34.730770 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.55s
2026-02-18 04:54:34.730781 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.54s
2026-02-18 04:54:34.730791 | orchestrator | Prepare test data ------------------------------------------------------- 0.53s
2026-02-18 04:54:34.730802 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.50s
2026-02-18 04:54:34.730813 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.40s
2026-02-18 04:54:35.087797 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-02-18 04:54:35.096620 | orchestrator | + set -e
2026-02-18 04:54:35.096695 | orchestrator | + source /opt/manager-vars.sh
2026-02-18 04:54:35.096709 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-18 04:54:35.096720 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-18 04:54:35.096731 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-18 04:54:35.096742 | orchestrator | ++ CEPH_VERSION=reef
2026-02-18 04:54:35.096753 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-18 04:54:35.096766 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-18 04:54:35.096777 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-18 04:54:35.096788 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-18 04:54:35.096799 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-18 04:54:35.096996 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-18 04:54:35.097096 | orchestrator | ++ export ARA=false
2026-02-18 04:54:35.097109 | orchestrator | ++ ARA=false
2026-02-18 04:54:35.097120 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-18 04:54:35.097131 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-18 04:54:35.097142 | orchestrator | ++ export TEMPEST=false
2026-02-18 04:54:35.097153 | orchestrator | ++ TEMPEST=false
2026-02-18 04:54:35.097164 | orchestrator | ++ export IS_ZUUL=true
2026-02-18 04:54:35.097175 | orchestrator | ++ IS_ZUUL=true
2026-02-18 04:54:35.097186 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 04:54:35.097197 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 04:54:35.097208 | orchestrator | ++ export EXTERNAL_API=false
2026-02-18 04:54:35.097219 | orchestrator | ++ EXTERNAL_API=false
2026-02-18 04:54:35.097230 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-18 04:54:35.097241 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-18 04:54:35.097252 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-18 04:54:35.097263 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-18 04:54:35.097274 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-18 04:54:35.097285 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-18 04:54:35.097296 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-18 04:54:35.097306 | orchestrator | + source /etc/os-release
2026-02-18 04:54:35.097317 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS'
2026-02-18 04:54:35.097339 | orchestrator | ++ NAME=Ubuntu
2026-02-18 04:54:35.097350 | orchestrator | ++ VERSION_ID=24.04
2026-02-18 04:54:35.097361 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)'
2026-02-18 04:54:35.097372 | orchestrator | ++ VERSION_CODENAME=noble
2026-02-18 04:54:35.097383 | orchestrator | ++ ID=ubuntu
2026-02-18 04:54:35.097394 | orchestrator | ++ ID_LIKE=debian
2026-02-18 04:54:35.097404 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-02-18 04:54:35.097415 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-02-18 04:54:35.097426 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-02-18 04:54:35.097438 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-02-18 04:54:35.097449 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-02-18 04:54:35.097460 | orchestrator | ++ LOGO=ubuntu-logo
2026-02-18 04:54:35.097471 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-02-18 04:54:35.097483 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-02-18 04:54:35.097496 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-02-18 04:54:35.129392 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-02-18 04:54:58.863209 | orchestrator |
2026-02-18 04:54:58.863353 | orchestrator | # Status of Elasticsearch
2026-02-18 04:54:58.863369 | orchestrator |
2026-02-18 04:54:58.863381 | orchestrator | + pushd /opt/configuration/contrib
2026-02-18 04:54:58.863392 | orchestrator | + echo
2026-02-18 04:54:58.863403 | orchestrator | + echo '# Status of Elasticsearch'
2026-02-18 04:54:58.863413 | orchestrator | + echo
2026-02-18 04:54:58.863423 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-02-18 04:54:59.115473 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-02-18 04:54:59.115688 | orchestrator |
2026-02-18 04:54:59.115705 | orchestrator | # Status of MariaDB
2026-02-18 04:54:59.115718 | orchestrator |
2026-02-18 04:54:59.115728 | orchestrator | + echo
2026-02-18 04:54:59.115771 | orchestrator | + echo '# Status of MariaDB'
2026-02-18 04:54:59.115781 | orchestrator | + echo
2026-02-18 04:54:59.115804 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-18 04:54:59.170624 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-18 04:54:59.170735 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-18 04:54:59.170747 | orchestrator | + MARIADB_USER=root_shard_0
2026-02-18 04:54:59.170759 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2026-02-18 04:54:59.226930 | orchestrator | Reading package lists...
2026-02-18 04:54:59.578606 | orchestrator | Building dependency tree...
2026-02-18 04:54:59.578975 | orchestrator | Reading state information...
2026-02-18 04:54:59.932234 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-02-18 04:54:59.932329 | orchestrator | bc set to manually installed.
2026-02-18 04:54:59.932343 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
2026-02-18 04:55:00.650751 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-02-18 04:55:00.651854 | orchestrator |
2026-02-18 04:55:00.651916 | orchestrator | # Status of Prometheus
2026-02-18 04:55:00.651937 | orchestrator |
2026-02-18 04:55:00.651955 | orchestrator | + echo
2026-02-18 04:55:00.651974 | orchestrator | + echo '# Status of Prometheus'
2026-02-18 04:55:00.652049 | orchestrator | + echo
2026-02-18 04:55:00.652073 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-02-18 04:55:00.705307 | orchestrator | Unauthorized
2026-02-18 04:55:00.709923 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-02-18 04:55:00.767777 | orchestrator | Unauthorized
2026-02-18 04:55:00.776127 | orchestrator |
2026-02-18 04:55:00.776189 | orchestrator | # Status of RabbitMQ
2026-02-18 04:55:00.776195 | orchestrator |
2026-02-18 04:55:00.776200 | orchestrator | + echo
2026-02-18 04:55:00.776204 | orchestrator | + echo '# Status of RabbitMQ'
2026-02-18 04:55:00.776208 | orchestrator | + echo
2026-02-18 04:55:00.777141 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-18 04:55:00.827036 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-18 04:55:00.827100 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-18 04:55:00.827108 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-02-18 04:55:01.314782 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-02-18 04:55:01.325186 | orchestrator |
2026-02-18 04:55:01.325274 | orchestrator | # Status of Redis
2026-02-18 04:55:01.325288 | orchestrator |
2026-02-18 04:55:01.325300 | orchestrator | + echo
2026-02-18 04:55:01.325311 | orchestrator | + echo '# Status of Redis'
2026-02-18 04:55:01.325323 | orchestrator | + echo
2026-02-18 04:55:01.325335 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-02-18 04:55:01.332649 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001915s;;;0.000000;10.000000
2026-02-18 04:55:01.333135 | orchestrator |
2026-02-18 04:55:01.333167 | orchestrator | # Create backup of MariaDB database
2026-02-18 04:55:01.333181 | orchestrator |
2026-02-18 04:55:01.333194 | orchestrator | + popd
2026-02-18 04:55:01.333206 | orchestrator | + echo
2026-02-18 04:55:01.333218 | orchestrator | + echo '# Create backup of MariaDB database'
2026-02-18 04:55:01.333229 | orchestrator | + echo
2026-02-18 04:55:01.333240 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-02-18 04:55:03.508324 | orchestrator | 2026-02-18 04:55:03 | INFO  | Task 7b95ebb3-5505-49b6-9761-ae8ef4aec52e (mariadb_backup) was prepared for execution.
2026-02-18 04:55:03.508405 | orchestrator | 2026-02-18 04:55:03 | INFO  | It takes a moment until task 7b95ebb3-5505-49b6-9761-ae8ef4aec52e (mariadb_backup) has been started and output is visible here.
2026-02-18 04:58:02.371187 | orchestrator |
2026-02-18 04:58:02.371328 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 04:58:02.371346 | orchestrator |
2026-02-18 04:58:02.371359 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 04:58:02.371385 | orchestrator | Wednesday 18 February 2026 04:55:07 +0000 (0:00:00.189) 0:00:00.189 ****
2026-02-18 04:58:02.371435 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:58:02.371450 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:58:02.371461 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:58:02.371473 | orchestrator |
2026-02-18 04:58:02.371506 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 04:58:02.371518 | orchestrator | Wednesday 18 February 2026 04:55:08 +0000 (0:00:00.328) 0:00:00.517 ****
2026-02-18 04:58:02.371530 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-18 04:58:02.371541 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-18 04:58:02.371552 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-18 04:58:02.371563 | orchestrator |
2026-02-18 04:58:02.371574 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-18 04:58:02.371585 | orchestrator |
2026-02-18 04:58:02.371596 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-18 04:58:02.371607 | orchestrator | Wednesday 18 February 2026 04:55:08 +0000 (0:00:00.606) 0:00:01.124 ****
2026-02-18 04:58:02.371618 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 04:58:02.371629 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 04:58:02.371640 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 04:58:02.371651 | orchestrator |
2026-02-18 04:58:02.371662 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-18 04:58:02.371673 | orchestrator | Wednesday 18 February 2026 04:55:09 +0000 (0:00:00.436) 0:00:01.561 ****
2026-02-18 04:58:02.371685 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 04:58:02.371698 | orchestrator |
2026-02-18 04:58:02.371709 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-02-18 04:58:02.371737 | orchestrator | Wednesday 18 February 2026 04:55:09 +0000 (0:00:00.561) 0:00:02.123 ****
2026-02-18 04:58:02.371751 | orchestrator | ok: [testbed-node-1]
2026-02-18 04:58:02.371764 | orchestrator | ok: [testbed-node-0]
2026-02-18 04:58:02.371776 | orchestrator | ok: [testbed-node-2]
2026-02-18 04:58:02.371789 | orchestrator |
2026-02-18 04:58:02.371803 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-02-18 04:58:02.371817 | orchestrator | Wednesday 18 February 2026 04:55:13 +0000 (0:00:03.493) 0:00:05.616 ****
2026-02-18 04:58:02.371829 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:58:02.371843 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:58:02.371856 | orchestrator |
2026-02-18 04:58:02.371868 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] ***
2026-02-18 04:58:02.371882 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-18 04:58:02.371894 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-02-18 04:58:02.371908 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-18 04:58:02.371921 | orchestrator | mariadb_bootstrap_restart
2026-02-18 04:58:02.371933 | orchestrator | changed: [testbed-node-0]
2026-02-18 04:58:02.371975 | orchestrator |
2026-02-18 04:58:02.371988 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-18 04:58:02.371999 | orchestrator | skipping: no hosts matched
2026-02-18 04:58:02.372010 | orchestrator |
2026-02-18 04:58:02.372020 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-18 04:58:02.372031 | orchestrator | skipping: no hosts matched
2026-02-18 04:58:02.372042 | orchestrator |
2026-02-18 04:58:02.372052 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-18 04:58:02.372063 | orchestrator | skipping: no hosts matched
2026-02-18 04:58:02.372074 | orchestrator |
2026-02-18 04:58:02.372085 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-18 04:58:02.372095 | orchestrator |
2026-02-18 04:58:02.372106 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-18 04:58:02.372117 | orchestrator | Wednesday 18 February 2026 04:58:01 +0000 (0:02:47.735) 0:02:53.351 ****
2026-02-18 04:58:02.372127 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:58:02.372138 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:58:02.372158 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:58:02.372169 | orchestrator |
2026-02-18 04:58:02.372180 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-18 04:58:02.372190 | orchestrator | Wednesday 18 February 2026 04:58:01 +0000 (0:00:00.336) 0:02:53.688 ****
2026-02-18 04:58:02.372201 | orchestrator | skipping: [testbed-node-0]
2026-02-18 04:58:02.372212 | orchestrator | skipping: [testbed-node-1]
2026-02-18 04:58:02.372223 | orchestrator | skipping: [testbed-node-2]
2026-02-18 04:58:02.372233 | orchestrator |
2026-02-18 04:58:02.372244 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 04:58:02.372256 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 04:58:02.372268 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-18 04:58:02.372279 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-18 04:58:02.372290 | orchestrator |
2026-02-18 04:58:02.372300 | orchestrator |
2026-02-18 04:58:02.372311 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 04:58:02.372322 | orchestrator | Wednesday 18 February 2026 04:58:01 +0000 (0:00:00.504) 0:02:54.192 ****
2026-02-18 04:58:02.372333 | orchestrator | ===============================================================================
2026-02-18 04:58:02.372361 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 167.74s
2026-02-18 04:58:02.372373 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.49s
2026-02-18 04:58:02.372384 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2026-02-18 04:58:02.372395 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s
2026-02-18 04:58:02.372406 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.50s
2026-02-18 04:58:02.372416 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.44s
2026-02-18 04:58:02.372427 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.34s
2026-02-18 04:58:02.372438 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-02-18 04:58:02.815853 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-02-18 04:58:02.827825 | orchestrator | + set -e
2026-02-18 04:58:02.827893 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-18 04:58:02.827906 | orchestrator | ++ export INTERACTIVE=false
2026-02-18 04:58:02.827918 | orchestrator | ++ INTERACTIVE=false
2026-02-18 04:58:02.827928 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-18 04:58:02.827939 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-18 04:58:02.827986 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-18 04:58:02.829367 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-18 04:58:02.836879 | orchestrator |
2026-02-18 04:58:02.836936 | orchestrator | # OpenStack endpoints
2026-02-18 04:58:02.836980 | orchestrator |
2026-02-18 04:58:02.836992 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-18 04:58:02.837004 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-18 04:58:02.837015 | orchestrator | + export OS_CLOUD=admin
2026-02-18 04:58:02.837026 | orchestrator | + OS_CLOUD=admin
2026-02-18 04:58:02.837037 | orchestrator | + echo
2026-02-18 04:58:02.837048 | orchestrator | + echo '# OpenStack endpoints'
2026-02-18 04:58:02.837058 | orchestrator | + echo
2026-02-18 04:58:02.837069 | orchestrator | + openstack endpoint list
2026-02-18 04:58:06.150337 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-18 04:58:06.150439 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-02-18 04:58:06.150453 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-18 04:58:06.150489 | orchestrator | | 04ae676c28964f1aacc603c6ad8050e6 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-18 04:58:06.150518 | orchestrator | | 0886537e889d436c841cfb0556c79444 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-18 04:58:06.150531 | orchestrator | | 09f2e49aacb64543822c852027d39369 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-02-18 04:58:06.150542 | orchestrator | | 0a33de6e9f1743f0af98b376216c312e | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-02-18 04:58:06.150553 | orchestrator | | 0b4d76b60ae74a1c9cb749298178e736 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-02-18 04:58:06.150564 | orchestrator | | 1c8fc020a04949e08fe7b889457fdc7b | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-18 04:58:06.150575 | orchestrator | | 1e3b1dc4cb0147399925c78c07f46658 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-18 04:58:06.150586 | orchestrator | | 2851eff785794b389291babd1d04c545 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-02-18 04:58:06.150597 | orchestrator | | 318a2dc2eeb940df8c618a968ed7c149 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-02-18 04:58:06.150608 | orchestrator | | 3eff2b3f68684093923de0ce6d563a37 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-02-18 04:58:06.150619 | orchestrator | | 4e3c0a8713f249c6a0f2c0d8661c919d | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-02-18 04:58:06.150630 | orchestrator | | 4efdbfbcd78844208e0aa71274d42ea5 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-02-18 04:58:06.150641 | orchestrator | | 5f9b7e58f5f74705b9c2372f8ebe268a | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-02-18 04:58:06.150651 | orchestrator | | 74f494c955e048298ffc4a8510e9ff8e | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-18 04:58:06.150662 | orchestrator | | 77f9a9f88b9947f7829bcf6dddd8e047 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-18 04:58:06.150673 | orchestrator | | 860b392d9f9c406ea5bdff507a3b64da | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-02-18 04:58:06.150684 | orchestrator | | 8efb106e759f45d9a1b7eb998237d789 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-02-18 04:58:06.150695 | orchestrator | | 999d7308234a4b54b333f0ea9aa05de7 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-02-18 04:58:06.150706 | orchestrator | | a05e7f97bfb747119b6c878e8001ffd3 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-02-18 04:58:06.150717 | orchestrator | | a4292e3c47bc4195b52863d7be5f30dd | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-02-18 04:58:06.150752 | orchestrator | | a5952bb241644889a1ba0af0949bae3f | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-02-18 04:58:06.150763 | orchestrator | | a6a1f80bb3e34727bfe002eecdcec315 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-02-18 04:58:06.150779 | orchestrator | | b7e3ac123e364a918360f0830ece9823 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-02-18 04:58:06.150790 | orchestrator | | b981749cfdc049f48517d92417e6e0e2 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-02-18 04:58:06.150801 | orchestrator | | d42619c00830488083ba8b7c129d556a | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-02-18 04:58:06.150812 | orchestrator | | e0f4ece7ddab49038648fe0761d06356 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-02-18 04:58:06.150823 | orchestrator | | f12c4f5471be41d0b064504e27cc6b47 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-02-18 04:58:06.150833 | orchestrator | | f1a9be0a2af748e18079ce75f46c4433 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-02-18 04:58:06.150844 | orchestrator | | fd42464bd0eb4bb8bf1ab1b00a91aff6 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-02-18 04:58:06.150855 | orchestrator | | ffda5f493ab54734bc9a10e0b9392873 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-02-18 04:58:06.150866 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-18 04:58:06.446725 | orchestrator |
2026-02-18 04:58:06.446826 | orchestrator | # Cinder
2026-02-18 04:58:06.446858 | orchestrator |
2026-02-18 04:58:06.446870 | orchestrator | + echo
2026-02-18 04:58:06.446890 | orchestrator | + echo '# Cinder'
2026-02-18 04:58:06.446900 | orchestrator | + echo
2026-02-18 04:58:06.446910 | orchestrator | + openstack volume service list
2026-02-18 04:58:09.226662 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-18 04:58:09.226768 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-02-18 04:58:09.226783 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-18 04:58:09.226794 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-18T04:58:05.000000 |
2026-02-18 04:58:09.226805 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-18T04:58:05.000000 |
2026-02-18 04:58:09.226816 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-18T04:58:05.000000 |
2026-02-18 04:58:09.226826 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-18T04:58:05.000000 |
2026-02-18 04:58:09.226837 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-18T04:58:01.000000 |
2026-02-18 04:58:09.226848 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-18T04:58:02.000000 |
2026-02-18 04:58:09.226859 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-18T04:57:59.000000 |
2026-02-18 04:58:09.226870 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-18T04:58:01.000000 |
2026-02-18 04:58:09.226909 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-18T04:58:01.000000 |
2026-02-18 04:58:09.226921 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-18 04:58:09.580718 | orchestrator |
2026-02-18 04:58:09.580808 | orchestrator | # Neutron
2026-02-18 04:58:09.580821 | orchestrator |
2026-02-18 04:58:09.580833 | orchestrator | + echo
2026-02-18 04:58:09.580844 | orchestrator | + echo '# Neutron'
2026-02-18 04:58:09.580854 | orchestrator | + echo
2026-02-18 04:58:09.580864 | orchestrator | + openstack network agent list
2026-02-18 04:58:12.370578 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-18 04:58:12.370705 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-02-18 04:58:12.370726 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-18 04:58:12.370738 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-02-18 04:58:12.370749 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-02-18 04:58:12.370760 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-02-18 04:58:12.370789 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-02-18 04:58:12.370801 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-02-18 04:58:12.370812 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-02-18 04:58:12.370822 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-18 04:58:12.370833 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-18 04:58:12.370844 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-18 04:58:12.370855 | orchestrator |
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-18 04:58:12.693934 | orchestrator | + openstack network service provider list 2026-02-18 04:58:15.266354 | orchestrator | +---------------+------+---------+ 2026-02-18 04:58:15.266464 | orchestrator | | Service Type | Name | Default | 2026-02-18 04:58:15.266479 | orchestrator | +---------------+------+---------+ 2026-02-18 04:58:15.266491 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-02-18 04:58:15.266502 | orchestrator | +---------------+------+---------+ 2026-02-18 04:58:15.556131 | orchestrator | 2026-02-18 04:58:15.556240 | orchestrator | # Nova 2026-02-18 04:58:15.556256 | orchestrator | 2026-02-18 04:58:15.556268 | orchestrator | + echo 2026-02-18 04:58:15.556288 | orchestrator | + echo '# Nova' 2026-02-18 04:58:15.556308 | orchestrator | + echo 2026-02-18 04:58:15.556328 | orchestrator | + openstack compute service list 2026-02-18 04:58:18.690501 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-18 04:58:18.690649 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-02-18 04:58:18.690667 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-18 04:58:18.691724 | orchestrator | | 410b78db-d928-4d6a-9122-f2e95263ee79 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-18T04:58:18.000000 | 2026-02-18 04:58:18.691813 | orchestrator | | df6808f1-5450-43c2-8b7d-c05d4c71f6ef | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-18T04:58:11.000000 | 2026-02-18 04:58:18.691837 | orchestrator | | 69896054-8256-43b4-b38e-a2d33ed78f65 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-18T04:58:12.000000 | 2026-02-18 
04:58:18.691859 | orchestrator | | 89a06411-fb6e-4f90-8ec1-9ef4f9e7866f | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-18T04:58:09.000000 | 2026-02-18 04:58:18.691879 | orchestrator | | 107bfabb-aa95-4a86-82b3-d9d2fe5cf760 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-18T04:58:10.000000 | 2026-02-18 04:58:18.691898 | orchestrator | | 5babe12c-c1e0-4e71-b1d2-3748e6fe050b | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-18T04:58:11.000000 | 2026-02-18 04:58:18.691918 | orchestrator | | 62895caf-5871-4dc6-86ab-78d402fc7cff | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-18T04:58:15.000000 | 2026-02-18 04:58:18.691939 | orchestrator | | 08d7bf65-0093-44be-a1e8-5ec677ea5565 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-18T04:58:15.000000 | 2026-02-18 04:58:18.692010 | orchestrator | | 2731f823-28e0-475b-a467-35ea378b44e7 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-18T04:58:15.000000 | 2026-02-18 04:58:18.692029 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-18 04:58:19.009622 | orchestrator | + openstack hypervisor list 2026-02-18 04:58:21.790283 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-18 04:58:21.790390 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-02-18 04:58:21.790406 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-18 04:58:21.790418 | orchestrator | | 26ea0e2a-6351-4671-afa6-7006b910e44f | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-02-18 04:58:21.790429 | orchestrator | | e1ad72a8-f3ce-42c1-beae-170beb0e2c1b | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-02-18 04:58:21.790440 | orchestrator | | 
76930594-3f50-4fbb-b596-d074b2b84876 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-02-18 04:58:21.790451 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-18 04:58:22.095377 | orchestrator | 2026-02-18 04:58:22.095476 | orchestrator | # Run OpenStack test play 2026-02-18 04:58:22.095491 | orchestrator | 2026-02-18 04:58:22.095503 | orchestrator | + echo 2026-02-18 04:58:22.095515 | orchestrator | + echo '# Run OpenStack test play' 2026-02-18 04:58:22.095526 | orchestrator | + echo 2026-02-18 04:58:22.095538 | orchestrator | + osism apply --environment openstack test 2026-02-18 04:58:24.120229 | orchestrator | 2026-02-18 04:58:24 | INFO  | Trying to run play test in environment openstack 2026-02-18 04:58:34.222561 | orchestrator | 2026-02-18 04:58:34 | INFO  | Task 8d1c8ac0-1624-45e8-8a91-2d0db479ac36 (test) was prepared for execution. 2026-02-18 04:58:34.222668 | orchestrator | 2026-02-18 04:58:34 | INFO  | It takes a moment until task 8d1c8ac0-1624-45e8-8a91-2d0db479ac36 (test) has been started and output is visible here. 
2026-02-18 05:01:07.495903 | orchestrator |
2026-02-18 05:01:07.496089 | orchestrator | PLAY [Create test project] *****************************************************
2026-02-18 05:01:07.496108 | orchestrator |
2026-02-18 05:01:07.496119 | orchestrator | TASK [Create test domain] ******************************************************
2026-02-18 05:01:07.496130 | orchestrator | Wednesday 18 February 2026 04:58:38 +0000 (0:00:00.073) 0:00:00.073 ****
2026-02-18 05:01:07.496140 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496151 | orchestrator |
2026-02-18 05:01:07.496161 | orchestrator | TASK [Create test-admin user] **************************************************
2026-02-18 05:01:07.496171 | orchestrator | Wednesday 18 February 2026 04:58:42 +0000 (0:00:03.762) 0:00:03.836 ****
2026-02-18 05:01:07.496202 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496213 | orchestrator |
2026-02-18 05:01:07.496222 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-02-18 05:01:07.496232 | orchestrator | Wednesday 18 February 2026 04:58:46 +0000 (0:00:04.100) 0:00:07.936 ****
2026-02-18 05:01:07.496242 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496252 | orchestrator |
2026-02-18 05:01:07.496261 | orchestrator | TASK [Create test project] *****************************************************
2026-02-18 05:01:07.496271 | orchestrator | Wednesday 18 February 2026 04:58:52 +0000 (0:00:06.404) 0:00:14.340 ****
2026-02-18 05:01:07.496281 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496290 | orchestrator |
2026-02-18 05:01:07.496300 | orchestrator | TASK [Create test user] ********************************************************
2026-02-18 05:01:07.496310 | orchestrator | Wednesday 18 February 2026 04:58:56 +0000 (0:00:03.977) 0:00:18.318 ****
2026-02-18 05:01:07.496319 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496329 | orchestrator |
2026-02-18 05:01:07.496339 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-02-18 05:01:07.496349 | orchestrator | Wednesday 18 February 2026 04:59:00 +0000 (0:00:04.122) 0:00:22.441 ****
2026-02-18 05:01:07.496359 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-02-18 05:01:07.496369 | orchestrator | changed: [localhost] => (item=member)
2026-02-18 05:01:07.496379 | orchestrator | changed: [localhost] => (item=creator)
2026-02-18 05:01:07.496389 | orchestrator |
2026-02-18 05:01:07.496399 | orchestrator | TASK [Create test server group] ************************************************
2026-02-18 05:01:07.496409 | orchestrator | Wednesday 18 February 2026 04:59:12 +0000 (0:00:11.804) 0:00:34.245 ****
2026-02-18 05:01:07.496418 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496428 | orchestrator |
2026-02-18 05:01:07.496438 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-02-18 05:01:07.496447 | orchestrator | Wednesday 18 February 2026 04:59:17 +0000 (0:00:04.271) 0:00:38.517 ****
2026-02-18 05:01:07.496457 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496466 | orchestrator |
2026-02-18 05:01:07.496476 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-02-18 05:01:07.496485 | orchestrator | Wednesday 18 February 2026 04:59:21 +0000 (0:00:04.691) 0:00:43.209 ****
2026-02-18 05:01:07.496495 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496505 | orchestrator |
2026-02-18 05:01:07.496514 | orchestrator | TASK [Create icmp security group] **********************************************
2026-02-18 05:01:07.496524 | orchestrator | Wednesday 18 February 2026 04:59:26 +0000 (0:00:04.303) 0:00:47.512 ****
2026-02-18 05:01:07.496534 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496544 | orchestrator |
2026-02-18 05:01:07.496553 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-02-18 05:01:07.496563 | orchestrator | Wednesday 18 February 2026 04:59:29 +0000 (0:00:03.909) 0:00:51.421 ****
2026-02-18 05:01:07.496572 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496582 | orchestrator |
2026-02-18 05:01:07.496592 | orchestrator | TASK [Create test keypair] *****************************************************
2026-02-18 05:01:07.496601 | orchestrator | Wednesday 18 February 2026 04:59:34 +0000 (0:00:04.193) 0:00:55.615 ****
2026-02-18 05:01:07.496611 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496621 | orchestrator |
2026-02-18 05:01:07.496630 | orchestrator | TASK [Create test network] *****************************************************
2026-02-18 05:01:07.496640 | orchestrator | Wednesday 18 February 2026 04:59:38 +0000 (0:00:03.928) 0:00:59.544 ****
2026-02-18 05:01:07.496650 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496660 | orchestrator |
2026-02-18 05:01:07.496669 | orchestrator | TASK [Create test subnet] ******************************************************
2026-02-18 05:01:07.496679 | orchestrator | Wednesday 18 February 2026 04:59:42 +0000 (0:00:04.680) 0:01:04.225 ****
2026-02-18 05:01:07.496710 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496720 | orchestrator |
2026-02-18 05:01:07.496730 | orchestrator | TASK [Create test router] ******************************************************
2026-02-18 05:01:07.496747 | orchestrator | Wednesday 18 February 2026 04:59:47 +0000 (0:00:05.070) 0:01:09.295 ****
2026-02-18 05:01:07.496757 | orchestrator | changed: [localhost]
2026-02-18 05:01:07.496766 | orchestrator |
2026-02-18 05:01:07.496776 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-02-18 05:01:07.496785 | orchestrator |
2026-02-18 05:01:07.496795 | orchestrator | TASK [Get test server group] ***************************************************
2026-02-18 05:01:07.496805 | orchestrator | Wednesday 18 February 2026 04:59:57 +0000 (0:00:09.771) 0:01:19.067 ****
2026-02-18 05:01:07.496815 | orchestrator | ok: [localhost]
2026-02-18 05:01:07.496825 | orchestrator |
2026-02-18 05:01:07.496834 | orchestrator | TASK [Detach test volume] ******************************************************
2026-02-18 05:01:07.496844 | orchestrator | Wednesday 18 February 2026 05:00:01 +0000 (0:00:03.617) 0:01:22.684 ****
2026-02-18 05:01:07.496853 | orchestrator | skipping: [localhost]
2026-02-18 05:01:07.496863 | orchestrator |
2026-02-18 05:01:07.496873 | orchestrator | TASK [Delete test volume] ******************************************************
2026-02-18 05:01:07.496882 | orchestrator | Wednesday 18 February 2026 05:00:01 +0000 (0:00:00.068) 0:01:22.752 ****
2026-02-18 05:01:07.496892 | orchestrator | skipping: [localhost]
2026-02-18 05:01:07.496902 | orchestrator |
2026-02-18 05:01:07.496925 | orchestrator | TASK [Delete test instances] ***************************************************
2026-02-18 05:01:07.496935 | orchestrator | Wednesday 18 February 2026 05:00:01 +0000 (0:00:00.074) 0:01:22.827 ****
2026-02-18 05:01:07.496964 | orchestrator | skipping: [localhost] => (item=test-4)
2026-02-18 05:01:07.496974 | orchestrator | skipping: [localhost] => (item=test-3)
2026-02-18 05:01:07.497000 | orchestrator | skipping: [localhost] => (item=test-2)
2026-02-18 05:01:07.497011 | orchestrator | skipping: [localhost] => (item=test-1)
2026-02-18 05:01:07.497021 | orchestrator | skipping: [localhost] => (item=test)
2026-02-18 05:01:07.497030 | orchestrator | skipping: [localhost]
2026-02-18 05:01:07.497040 | orchestrator |
2026-02-18 05:01:07.497050 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-02-18 05:01:07.497059 | orchestrator | Wednesday 18 February 2026 05:00:01 +0000 (0:00:00.205) 0:01:23.032 ****
2026-02-18 05:01:07.497069 | orchestrator | skipping: [localhost]
2026-02-18 05:01:07.497078 | orchestrator |
2026-02-18 05:01:07.497088 | orchestrator | TASK [Create test instances] ***************************************************
2026-02-18 05:01:07.497097 | orchestrator | Wednesday 18 February 2026 05:00:01 +0000 (0:00:00.193) 0:01:23.225 ****
2026-02-18 05:01:07.497107 | orchestrator | changed: [localhost] => (item=test)
2026-02-18 05:01:07.497117 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-18 05:01:07.497126 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-18 05:01:07.497136 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-18 05:01:07.497145 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-18 05:01:07.497155 | orchestrator |
2026-02-18 05:01:07.497164 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-02-18 05:01:07.497174 | orchestrator | Wednesday 18 February 2026 05:00:06 +0000 (0:00:05.130) 0:01:28.356 ****
2026-02-18 05:01:07.497184 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-02-18 05:01:07.497195 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-02-18 05:01:07.497204 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-02-18 05:01:07.497214 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-02-18 05:01:07.497225 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j858540822223.3745', 'results_file': '/ansible/.ansible_async/j858540822223.3745', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-02-18 05:01:07.497238 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j802874516950.3770', 'results_file': '/ansible/.ansible_async/j802874516950.3770', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-02-18 05:01:07.497255 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j3751976999.3795', 'results_file': '/ansible/.ansible_async/j3751976999.3795', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-02-18 05:01:07.497266 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j587448012468.3820', 'results_file': '/ansible/.ansible_async/j587448012468.3820', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-02-18 05:01:07.497276 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j491366719155.3845', 'results_file': '/ansible/.ansible_async/j491366719155.3845', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-02-18 05:01:07.497285 | orchestrator |
2026-02-18 05:01:07.497295 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-02-18 05:01:07.497305 | orchestrator | Wednesday 18 February 2026 05:00:53 +0000 (0:00:46.645) 0:02:15.001 ****
2026-02-18 05:01:07.497314 | orchestrator | changed: [localhost] => (item=test)
2026-02-18 05:01:07.497324 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-18 05:01:07.497333 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-18 05:01:07.497343 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-18 05:01:07.497352 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-18 05:01:07.497362 | orchestrator |
2026-02-18 05:01:07.497371 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-02-18 05:01:07.497381 | orchestrator | Wednesday 18 February 2026 05:00:58 +0000 (0:00:04.506) 0:02:19.507 ****
2026-02-18 05:01:07.497391 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-02-18 05:01:07.497401 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j813158865093.3942', 'results_file': '/ansible/.ansible_async/j813158865093.3942', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-02-18 05:01:07.497411 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j735436137237.3967', 'results_file': '/ansible/.ansible_async/j735436137237.3967', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-02-18 05:01:07.497420 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j190369068209.3992', 'results_file': '/ansible/.ansible_async/j190369068209.3992', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-02-18 05:01:07.497444 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j493505084913.4017', 'results_file': '/ansible/.ansible_async/j493505084913.4017', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-02-18 05:01:47.366341 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j146911095047.4042', 'results_file': '/ansible/.ansible_async/j146911095047.4042', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-02-18 05:01:47.366456 | orchestrator |
2026-02-18 05:01:47.366473 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-02-18 05:01:47.366487 | orchestrator | Wednesday 18 February 2026 05:01:07 +0000 (0:00:09.439) 0:02:28.947 ****
2026-02-18 05:01:47.366498 | orchestrator | changed: [localhost] => (item=test)
2026-02-18 05:01:47.366511 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-18 05:01:47.366522 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-18 05:01:47.366533 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-18 05:01:47.366544 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-18 05:01:47.366580 | orchestrator |
2026-02-18 05:01:47.366592 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-02-18 05:01:47.366603 | orchestrator | Wednesday 18 February 2026 05:01:12 +0000 (0:00:05.131) 0:02:34.078 ****
2026-02-18 05:01:47.366615 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-02-18 05:01:47.366628 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j795303537843.4118', 'results_file': '/ansible/.ansible_async/j795303537843.4118', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-02-18 05:01:47.366639 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j520190478375.4143', 'results_file': '/ansible/.ansible_async/j520190478375.4143', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-02-18 05:01:47.366650 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j155705540089.4169', 'results_file': '/ansible/.ansible_async/j155705540089.4169', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-02-18 05:01:47.366662 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j563629933361.4195', 'results_file': '/ansible/.ansible_async/j563629933361.4195', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-02-18 05:01:47.366673 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j65895847934.4221', 'results_file': '/ansible/.ansible_async/j65895847934.4221', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-02-18 05:01:47.366684 | orchestrator |
2026-02-18 05:01:47.366695 | orchestrator | TASK [Create test volume] ******************************************************
2026-02-18 05:01:47.366706 | orchestrator | Wednesday 18 February 2026 05:01:22 +0000 (0:00:09.826) 0:02:43.905 ****
2026-02-18 05:01:47.366716 | orchestrator | changed: [localhost]
2026-02-18 05:01:47.366727 | orchestrator |
2026-02-18 05:01:47.366738 | orchestrator | TASK [Attach test volume] ******************************************************
2026-02-18 05:01:47.366749 | orchestrator | Wednesday 18 February 2026 05:01:28 +0000 (0:00:06.469) 0:02:50.375 ****
2026-02-18 05:01:47.366760 | orchestrator | changed: [localhost]
2026-02-18 05:01:47.366770 | orchestrator |
2026-02-18 05:01:47.366781 | orchestrator | TASK [Create floating ip address] **********************************************
2026-02-18 05:01:47.366792 | orchestrator | Wednesday 18 February 2026 05:01:42 +0000 (0:00:13.391) 0:03:03.766 ****
2026-02-18 05:01:47.366803 | orchestrator | ok: [localhost]
2026-02-18 05:01:47.366814 | orchestrator |
2026-02-18 05:01:47.366839 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-02-18 05:01:47.366850 | orchestrator | Wednesday 18 February 2026 05:01:47 +0000 (0:00:04.749) 0:03:08.515 ****
2026-02-18 05:01:47.366861 | orchestrator | ok: [localhost] => {
2026-02-18 05:01:47.366872 | orchestrator |     "msg": "192.168.112.152"
2026-02-18 05:01:47.366883 | orchestrator | }
2026-02-18 05:01:47.366894 | orchestrator |
2026-02-18 05:01:47.366908 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 05:01:47.366921 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-18 05:01:47.366935 | orchestrator |
2026-02-18 05:01:47.366981 | orchestrator |
2026-02-18 05:01:47.366995 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 05:01:47.367008 | orchestrator | Wednesday 18 February 2026 05:01:47 +0000 (0:00:00.042) 0:03:08.558 ****
2026-02-18 05:01:47.367021 | orchestrator | ===============================================================================
2026-02-18 05:01:47.367033 | orchestrator | Wait for instance creation to complete --------------------------------- 46.65s
2026-02-18 05:01:47.367047 | orchestrator | Attach test volume ----------------------------------------------------- 13.39s
2026-02-18 05:01:47.367060 | orchestrator | Add member roles to user test ------------------------------------------ 11.80s
2026-02-18 05:01:47.367096 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.83s
2026-02-18 05:01:47.367110 | orchestrator | Create test router ------------------------------------------------------ 9.77s
2026-02-18 05:01:47.367123 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.44s
2026-02-18 05:01:47.367135 | orchestrator | Create test volume ------------------------------------------------------ 6.47s
2026-02-18 05:01:47.367167 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.40s
2026-02-18 05:01:47.367180 | orchestrator | Add tag to instances ---------------------------------------------------- 5.13s
2026-02-18 05:01:47.367193 | orchestrator | Create test instances --------------------------------------------------- 5.13s
2026-02-18 05:01:47.367205 | orchestrator | Create test subnet ------------------------------------------------------ 5.07s
2026-02-18 05:01:47.367217 | orchestrator | Create floating ip address ---------------------------------------------- 4.75s
2026-02-18 05:01:47.367230 | orchestrator | Create ssh security group ----------------------------------------------- 4.69s
2026-02-18 05:01:47.367242 | orchestrator | Create test network ----------------------------------------------------- 4.68s
2026-02-18 05:01:47.367255 | orchestrator | Add metadata to instances ----------------------------------------------- 4.51s
2026-02-18 05:01:47.367266 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.30s
2026-02-18 05:01:47.367276 | orchestrator | Create test server group ------------------------------------------------ 4.27s
2026-02-18 05:01:47.367287 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.19s
2026-02-18 05:01:47.367298 | orchestrator | Create test user -------------------------------------------------------- 4.12s
2026-02-18 05:01:47.367309 | orchestrator | Create test-admin user -------------------------------------------------- 4.10s
2026-02-18 05:01:47.669831 | orchestrator | + server_list
2026-02-18 05:01:47.669927 | orchestrator | + openstack --os-cloud test server list
2026-02-18 05:01:51.401361 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-02-18 05:01:51.401472 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-02-18 05:01:51.401484 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-02-18 05:01:51.401493 | orchestrator | | 209181ca-3fc2-48e0-adc2-c1e346b47e7c | test-4 | ACTIVE | test=192.168.112.156, 192.168.200.228 | N/A (booted from volume) | SCS-1L-1 |
2026-02-18 05:01:51.401501 | orchestrator | | d8d5bcca-6556-4d6b-9c76-83fecf1795ab | test-3 | ACTIVE | test=192.168.112.143, 192.168.200.65 | N/A (booted from volume) | SCS-1L-1 |
2026-02-18 05:01:51.401508 | orchestrator | | b0341f59-126d-4d63-9c87-ee2cffb93706 | test-2 | ACTIVE | test=192.168.112.171, 192.168.200.205 | N/A (booted from volume) | SCS-1L-1 |
2026-02-18 05:01:51.401515 | orchestrator | | e37ce189-2222-4fea-809c-506d5cf9d17a | test-1 | ACTIVE | test=192.168.112.144, 192.168.200.135 | N/A (booted from volume) | SCS-1L-1 |
2026-02-18 05:01:51.401524 | orchestrator | | f5e1f0de-9d01-471b-978d-9ae8ca0ae944 | test | ACTIVE | test=192.168.112.152, 192.168.200.98 | N/A (booted from volume) | SCS-1L-1 |
2026-02-18 05:01:51.401531 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-02-18 05:01:51.670574 | orchestrator | + openstack --os-cloud test server show test
2026-02-18 05:01:54.785574 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-18 05:01:54.785687 | orchestrator | | Field | Value |
2026-02-18 05:01:54.785723 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-18 05:01:54.785741 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-18 05:01:54.785754 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-18 05:01:54.785765 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-18 05:01:54.785776 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-02-18 05:01:54.785788 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-18 05:01:54.785799 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-18 05:01:54.785828 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-18 05:01:54.785840 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-18 05:01:54.785868 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-18 05:01:54.785888 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-18 05:01:54.785915 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-18 05:01:54.785934 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-18 05:01:54.785981 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-18 05:01:54.785998 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-18 05:01:54.786139 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-18 05:01:54.786165 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-18T05:00:36.000000 |
2026-02-18 05:01:54.786192 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-18 05:01:54.786223 | orchestrator | | accessIPv4 | |
2026-02-18 05:01:54.786237 | orchestrator | | accessIPv6 | |
2026-02-18 05:01:54.786251 | orchestrator | | addresses | test=192.168.112.152, 192.168.200.98 |
2026-02-18 05:01:54.786268 | orchestrator | | config_drive | |
2026-02-18 05:01:54.786279 | orchestrator | | created | 2026-02-18T05:00:10Z |
2026-02-18 05:01:54.786291 | orchestrator | | description | None |
2026-02-18 05:01:54.786302 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-18 05:01:54.786313 | orchestrator | | hostId | 99d52ce17f47b94ade6ae73c24ed3c077adc4a99d434bbe1c66cc00c |
2026-02-18 05:01:54.786324 | orchestrator | | host_status | None |
2026-02-18 05:01:54.786361 | orchestrator | | id | f5e1f0de-9d01-471b-978d-9ae8ca0ae944 |
2026-02-18 05:01:54.786380 | orchestrator | | image | N/A (booted from volume) |
2026-02-18 05:01:54.786398 | orchestrator | | key_name | test |
2026-02-18 05:01:54.786415 | orchestrator | | locked | False |
2026-02-18 05:01:54.786435 | orchestrator | | locked_reason | None |
2026-02-18 05:01:54.786452 | orchestrator | | name | test |
2026-02-18 05:01:54.786473 | orchestrator | | pinned_availability_zone | None |
2026-02-18 05:01:54.786487 | orchestrator | | progress | 0 |
2026-02-18 05:01:54.786499 | orchestrator | | project_id | 5f6fe02975fd46b3908081e5e6908fb4 |
2026-02-18 05:01:54.786510 | orchestrator | | properties | hostname='test' |
2026-02-18 05:01:54.786547 | orchestrator | | security_groups | name='ssh' |
2026-02-18 05:01:54.786559 | orchestrator | | | name='icmp' |
2026-02-18 05:01:54.786571 | orchestrator | | server_groups | None |
2026-02-18 05:01:54.786582 | orchestrator | | status | ACTIVE |
2026-02-18 05:01:54.786602 | orchestrator | | tags | test |
2026-02-18 05:01:54.786614 | orchestrator | | trusted_image_certificates | None |
2026-02-18 05:01:54.786625 | orchestrator | | updated | 2026-02-18T05:00:59Z |
2026-02-18 05:01:54.786637 | orchestrator | | user_id | c3da3eee66364f908829c23c4fdef4c3 |
2026-02-18 05:01:54.786648 | orchestrator | | volumes_attached | delete_on_termination='True', id='d3876b5a-f3e1-4494-a317-6dc57115eed8' |
2026-02-18 05:01:54.786666 | orchestrator | | | delete_on_termination='False', id='e1e58ee7-2619-4aa4-b7c2-cd76b47178f6' |
2026-02-18 05:01:54.789154 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-18 05:01:55.047786 | orchestrator | + openstack --os-cloud test server show test-1
2026-02-18 05:01:58.318169 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-18
05:01:58.318275 | orchestrator | | Field | Value | 2026-02-18 05:01:58.318301 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-18 05:01:58.318315 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-18 05:01:58.318327 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-18 05:01:58.318339 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-18 05:01:58.318351 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-02-18 05:01:58.318381 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-18 05:01:58.318393 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-18 05:01:58.318426 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-18 05:01:58.318438 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-18 05:01:58.318450 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-18 05:01:58.318466 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-18 05:01:58.318478 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-18 05:01:58.318490 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-18 05:01:58.318502 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-18 05:01:58.318520 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-18 05:01:58.318532 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-18 05:01:58.318544 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-18T05:00:36.000000 | 2026-02-18 05:01:58.318563 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-18 05:01:58.318578 | orchestrator | | accessIPv4 | | 2026-02-18 
05:01:58.318593 | orchestrator | | accessIPv6 | | 2026-02-18 05:01:58.318612 | orchestrator | | addresses | test=192.168.112.144, 192.168.200.135 | 2026-02-18 05:01:58.318626 | orchestrator | | config_drive | | 2026-02-18 05:01:58.318639 | orchestrator | | created | 2026-02-18T05:00:12Z | 2026-02-18 05:01:58.318659 | orchestrator | | description | None | 2026-02-18 05:01:58.318672 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-18 05:01:58.318685 | orchestrator | | hostId | 99d52ce17f47b94ade6ae73c24ed3c077adc4a99d434bbe1c66cc00c | 2026-02-18 05:01:58.318698 | orchestrator | | host_status | None | 2026-02-18 05:01:58.318719 | orchestrator | | id | e37ce189-2222-4fea-809c-506d5cf9d17a | 2026-02-18 05:01:58.318733 | orchestrator | | image | N/A (booted from volume) | 2026-02-18 05:01:58.318747 | orchestrator | | key_name | test | 2026-02-18 05:01:58.318765 | orchestrator | | locked | False | 2026-02-18 05:01:58.318778 | orchestrator | | locked_reason | None | 2026-02-18 05:01:58.318792 | orchestrator | | name | test-1 | 2026-02-18 05:01:58.318812 | orchestrator | | pinned_availability_zone | None | 2026-02-18 05:01:58.318826 | orchestrator | | progress | 0 | 2026-02-18 05:01:58.318839 | orchestrator | | project_id | 5f6fe02975fd46b3908081e5e6908fb4 | 2026-02-18 05:01:58.318853 | orchestrator | | properties | hostname='test-1' | 2026-02-18 05:01:58.318874 | orchestrator | | security_groups | name='ssh' | 2026-02-18 05:01:58.318888 | orchestrator | | | name='icmp' | 2026-02-18 05:01:58.318902 | orchestrator | | server_groups | None | 2026-02-18 05:01:58.318915 | orchestrator | | status | ACTIVE | 2026-02-18 
05:01:58.318929 | orchestrator | | tags | test | 2026-02-18 05:01:58.319008 | orchestrator | | trusted_image_certificates | None | 2026-02-18 05:01:58.319022 | orchestrator | | updated | 2026-02-18T05:00:59Z | 2026-02-18 05:01:58.319033 | orchestrator | | user_id | c3da3eee66364f908829c23c4fdef4c3 | 2026-02-18 05:01:58.319044 | orchestrator | | volumes_attached | delete_on_termination='True', id='44fb24bc-419b-44d8-82c2-d99ad5b6cc84' | 2026-02-18 05:01:58.322102 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-18 05:01:58.616802 | orchestrator | + openstack --os-cloud test server show test-2 2026-02-18 05:02:01.665101 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-18 05:02:01.665231 | orchestrator | | Field | Value | 2026-02-18 05:02:01.665276 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-18 05:02:01.665304 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-18 05:02:01.665353 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-18 05:02:01.665375 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-18 05:02:01.665394 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-02-18 05:02:01.665415 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-18 05:02:01.665434 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-18 05:02:01.665469 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-18 05:02:01.665482 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-18 05:02:01.665493 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-18 05:02:01.665504 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-18 05:02:01.665529 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-18 05:02:01.665541 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-18 05:02:01.665552 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-18 05:02:01.665563 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-18 05:02:01.665575 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-18 05:02:01.665586 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-18T05:00:36.000000 | 2026-02-18 05:02:01.665606 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-18 05:02:01.665620 | orchestrator | | accessIPv4 | | 2026-02-18 05:02:01.665634 | orchestrator | | accessIPv6 | | 2026-02-18 05:02:01.665660 | orchestrator | | addresses | test=192.168.112.171, 192.168.200.205 | 2026-02-18 05:02:01.665674 | orchestrator | | config_drive | | 2026-02-18 05:02:01.665688 | orchestrator | | created | 2026-02-18T05:00:12Z | 2026-02-18 05:02:01.665702 | orchestrator | | description | None | 2026-02-18 05:02:01.665715 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-18 05:02:01.665728 | orchestrator | | hostId | 99d52ce17f47b94ade6ae73c24ed3c077adc4a99d434bbe1c66cc00c | 2026-02-18 05:02:01.665741 | orchestrator | | host_status | None | 2026-02-18 05:02:01.665762 | orchestrator | | id | b0341f59-126d-4d63-9c87-ee2cffb93706 | 2026-02-18 05:02:01.665776 | orchestrator | | image | N/A (booted from volume) | 2026-02-18 05:02:01.665790 | orchestrator | | key_name | test | 2026-02-18 05:02:01.665815 | orchestrator | | locked | False | 2026-02-18 05:02:01.665830 | orchestrator | | locked_reason | None | 2026-02-18 05:02:01.665843 | orchestrator | | name | test-2 | 2026-02-18 05:02:01.665856 | orchestrator | | pinned_availability_zone | None | 2026-02-18 05:02:01.665870 | orchestrator | | progress | 0 | 2026-02-18 05:02:01.665883 | orchestrator | | project_id | 5f6fe02975fd46b3908081e5e6908fb4 | 2026-02-18 05:02:01.665896 | orchestrator | | properties | hostname='test-2' | 2026-02-18 05:02:01.665917 | orchestrator | | security_groups | name='ssh' | 2026-02-18 05:02:01.665931 | orchestrator | | | name='icmp' | 2026-02-18 05:02:01.665983 | orchestrator | | server_groups | None | 2026-02-18 05:02:01.666001 | orchestrator | | status | ACTIVE | 2026-02-18 05:02:01.666071 | orchestrator | | tags | test | 2026-02-18 05:02:01.666085 | orchestrator | | trusted_image_certificates | None | 2026-02-18 05:02:01.666096 | orchestrator | | updated | 2026-02-18T05:01:00Z | 2026-02-18 05:02:01.666107 | orchestrator | | user_id | c3da3eee66364f908829c23c4fdef4c3 | 2026-02-18 05:02:01.666119 | orchestrator | | volumes_attached | delete_on_termination='True', id='2006cdee-85a3-4f6b-9c3e-c411c2367dc9' | 2026-02-18 05:02:01.670274 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-18 05:02:01.968707 | orchestrator | + openstack --os-cloud test server show test-3 2026-02-18 05:02:05.079597 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-18 05:02:05.079723 | orchestrator | | Field | Value | 2026-02-18 05:02:05.079740 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-18 05:02:05.079766 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-18 05:02:05.079778 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-18 05:02:05.079790 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-18 05:02:05.079802 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-02-18 05:02:05.079813 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-18 05:02:05.079824 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-18 
05:02:05.079854 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-18 05:02:05.079875 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-18 05:02:05.079886 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-18 05:02:05.079898 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-18 05:02:05.079914 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-18 05:02:05.079926 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-18 05:02:05.079937 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-18 05:02:05.079993 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-18 05:02:05.080004 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-18 05:02:05.080016 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-18T05:00:39.000000 | 2026-02-18 05:02:05.080037 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-18 05:02:05.080056 | orchestrator | | accessIPv4 | | 2026-02-18 05:02:05.080068 | orchestrator | | accessIPv6 | | 2026-02-18 05:02:05.080079 | orchestrator | | addresses | test=192.168.112.143, 192.168.200.65 | 2026-02-18 05:02:05.080503 | orchestrator | | config_drive | | 2026-02-18 05:02:05.080521 | orchestrator | | created | 2026-02-18T05:00:13Z | 2026-02-18 05:02:05.080533 | orchestrator | | description | None | 2026-02-18 05:02:05.080544 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-18 05:02:05.080556 | orchestrator | | hostId | 99d52ce17f47b94ade6ae73c24ed3c077adc4a99d434bbe1c66cc00c | 2026-02-18 05:02:05.080567 | orchestrator | | host_status | None | 2026-02-18 05:02:05.080595 | orchestrator | | id | 
d8d5bcca-6556-4d6b-9c76-83fecf1795ab | 2026-02-18 05:02:05.080612 | orchestrator | | image | N/A (booted from volume) | 2026-02-18 05:02:05.080624 | orchestrator | | key_name | test | 2026-02-18 05:02:05.080635 | orchestrator | | locked | False | 2026-02-18 05:02:05.080646 | orchestrator | | locked_reason | None | 2026-02-18 05:02:05.080684 | orchestrator | | name | test-3 | 2026-02-18 05:02:05.080697 | orchestrator | | pinned_availability_zone | None | 2026-02-18 05:02:05.080708 | orchestrator | | progress | 0 | 2026-02-18 05:02:05.080720 | orchestrator | | project_id | 5f6fe02975fd46b3908081e5e6908fb4 | 2026-02-18 05:02:05.080738 | orchestrator | | properties | hostname='test-3' | 2026-02-18 05:02:05.080757 | orchestrator | | security_groups | name='ssh' | 2026-02-18 05:02:05.080774 | orchestrator | | | name='icmp' | 2026-02-18 05:02:05.080786 | orchestrator | | server_groups | None | 2026-02-18 05:02:05.080798 | orchestrator | | status | ACTIVE | 2026-02-18 05:02:05.080809 | orchestrator | | tags | test | 2026-02-18 05:02:05.080821 | orchestrator | | trusted_image_certificates | None | 2026-02-18 05:02:05.080832 | orchestrator | | updated | 2026-02-18T05:01:01Z | 2026-02-18 05:02:05.080843 | orchestrator | | user_id | c3da3eee66364f908829c23c4fdef4c3 | 2026-02-18 05:02:05.080866 | orchestrator | | volumes_attached | delete_on_termination='True', id='ab11ac5d-c75a-4368-a9a3-823327df237d' | 2026-02-18 05:02:05.084391 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-18 05:02:05.425412 | orchestrator | + openstack --os-cloud test server show test-4 2026-02-18 05:02:08.473456 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-18 05:02:08.473578 | orchestrator | | Field | Value | 2026-02-18 05:02:08.473596 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-18 05:02:08.473608 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-18 05:02:08.473620 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-18 05:02:08.473631 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-18 05:02:08.473642 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-02-18 05:02:08.473677 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-18 05:02:08.473689 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-18 05:02:08.473718 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-18 05:02:08.473736 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-18 05:02:08.473761 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-18 05:02:08.473774 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-18 05:02:08.473785 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-18 05:02:08.473796 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-18 05:02:08.473808 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-02-18 05:02:08.473819 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-18 05:02:08.473839 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-18 05:02:08.473851 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-18T05:00:39.000000 | 2026-02-18 05:02:08.473871 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-18 05:02:08.473888 | orchestrator | | accessIPv4 | | 2026-02-18 05:02:08.473899 | orchestrator | | accessIPv6 | | 2026-02-18 05:02:08.473911 | orchestrator | | addresses | test=192.168.112.156, 192.168.200.228 | 2026-02-18 05:02:08.473922 | orchestrator | | config_drive | | 2026-02-18 05:02:08.473933 | orchestrator | | created | 2026-02-18T05:00:14Z | 2026-02-18 05:02:08.474001 | orchestrator | | description | None | 2026-02-18 05:02:08.474098 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-18 05:02:08.474114 | orchestrator | | hostId | b63d15b38f83c9463924daa4e847ce953296b113e1d32271c19cf991 | 2026-02-18 05:02:08.474128 | orchestrator | | host_status | None | 2026-02-18 05:02:08.474153 | orchestrator | | id | 209181ca-3fc2-48e0-adc2-c1e346b47e7c | 2026-02-18 05:02:08.474182 | orchestrator | | image | N/A (booted from volume) | 2026-02-18 05:02:08.474202 | orchestrator | | key_name | test | 2026-02-18 05:02:08.474223 | orchestrator | | locked | False | 2026-02-18 05:02:08.474244 | orchestrator | | locked_reason | None | 2026-02-18 05:02:08.474263 | orchestrator | | name | test-4 | 2026-02-18 05:02:08.474292 | orchestrator | | pinned_availability_zone | None | 2026-02-18 05:02:08.474312 | orchestrator | | progress | 0 | 2026-02-18 
05:02:08.474333 | orchestrator | | project_id | 5f6fe02975fd46b3908081e5e6908fb4 | 2026-02-18 05:02:08.474352 | orchestrator | | properties | hostname='test-4' | 2026-02-18 05:02:08.474385 | orchestrator | | security_groups | name='ssh' | 2026-02-18 05:02:08.474414 | orchestrator | | | name='icmp' | 2026-02-18 05:02:08.474427 | orchestrator | | server_groups | None | 2026-02-18 05:02:08.474438 | orchestrator | | status | ACTIVE | 2026-02-18 05:02:08.474449 | orchestrator | | tags | test | 2026-02-18 05:02:08.474469 | orchestrator | | trusted_image_certificates | None | 2026-02-18 05:02:08.474480 | orchestrator | | updated | 2026-02-18T05:01:02Z | 2026-02-18 05:02:08.474491 | orchestrator | | user_id | c3da3eee66364f908829c23c4fdef4c3 | 2026-02-18 05:02:08.474503 | orchestrator | | volumes_attached | delete_on_termination='True', id='13ea6830-0a85-49df-bf18-9fee2c28bb03' | 2026-02-18 05:02:08.477898 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-18 05:02:08.731524 | orchestrator | + server_ping 2026-02-18 05:02:08.732536 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-02-18 05:02:08.732733 | orchestrator | ++ tr -d '\r' 2026-02-18 05:02:11.710267 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-18 05:02:11.710358 | orchestrator | + ping -c3 192.168.112.143 2026-02-18 05:02:11.723706 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 
2026-02-18 05:02:11.723777 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=6.61 ms 2026-02-18 05:02:12.721837 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=2.45 ms 2026-02-18 05:02:13.722692 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.65 ms 2026-02-18 05:02:13.722795 | orchestrator | 2026-02-18 05:02:13.722811 | orchestrator | --- 192.168.112.143 ping statistics --- 2026-02-18 05:02:13.722823 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-02-18 05:02:13.722834 | orchestrator | rtt min/avg/max/mdev = 1.645/3.568/6.611/2.176 ms 2026-02-18 05:02:13.723504 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-18 05:02:13.723592 | orchestrator | + ping -c3 192.168.112.144 2026-02-18 05:02:13.735577 | orchestrator | PING 192.168.112.144 (192.168.112.144) 56(84) bytes of data. 2026-02-18 05:02:13.735642 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=1 ttl=63 time=7.54 ms 2026-02-18 05:02:14.732729 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=2 ttl=63 time=2.55 ms 2026-02-18 05:02:15.732905 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=3 ttl=63 time=1.94 ms 2026-02-18 05:02:15.733092 | orchestrator | 2026-02-18 05:02:15.733111 | orchestrator | --- 192.168.112.144 ping statistics --- 2026-02-18 05:02:15.733188 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-18 05:02:15.733231 | orchestrator | rtt min/avg/max/mdev = 1.943/4.011/7.538/2.505 ms 2026-02-18 05:02:15.733570 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-18 05:02:15.733597 | orchestrator | + ping -c3 192.168.112.156 2026-02-18 05:02:15.746816 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data. 
2026-02-18 05:02:15.746889 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=7.90 ms 2026-02-18 05:02:16.743045 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=2.32 ms 2026-02-18 05:02:17.745999 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.71 ms 2026-02-18 05:02:17.746190 | orchestrator | 2026-02-18 05:02:17.746207 | orchestrator | --- 192.168.112.156 ping statistics --- 2026-02-18 05:02:17.746219 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-18 05:02:17.746312 | orchestrator | rtt min/avg/max/mdev = 1.708/3.975/7.899/2.785 ms 2026-02-18 05:02:17.746332 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-18 05:02:17.746347 | orchestrator | + ping -c3 192.168.112.152 2026-02-18 05:02:17.756026 | orchestrator | PING 192.168.112.152 (192.168.112.152) 56(84) bytes of data. 2026-02-18 05:02:17.756073 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=1 ttl=63 time=5.67 ms 2026-02-18 05:02:18.755305 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=2 ttl=63 time=2.58 ms 2026-02-18 05:02:19.756936 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=3 ttl=63 time=1.85 ms 2026-02-18 05:02:19.757075 | orchestrator | 2026-02-18 05:02:19.757091 | orchestrator | --- 192.168.112.152 ping statistics --- 2026-02-18 05:02:19.757104 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-02-18 05:02:19.757115 | orchestrator | rtt min/avg/max/mdev = 1.847/3.367/5.673/1.657 ms 2026-02-18 05:02:19.757127 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-18 05:02:19.757138 | orchestrator | + ping -c3 192.168.112.171 2026-02-18 05:02:19.768321 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data. 
2026-02-18 05:02:19.768413 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=6.63 ms 2026-02-18 05:02:20.766270 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=2.36 ms 2026-02-18 05:02:21.767824 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=1.85 ms 2026-02-18 05:02:21.767931 | orchestrator | 2026-02-18 05:02:21.767995 | orchestrator | --- 192.168.112.171 ping statistics --- 2026-02-18 05:02:21.768009 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-02-18 05:02:21.768020 | orchestrator | rtt min/avg/max/mdev = 1.853/3.614/6.632/2.143 ms 2026-02-18 05:02:21.768336 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-18 05:02:22.041364 | orchestrator | ok: Runtime: 0:10:23.922107 2026-02-18 05:02:22.093249 | 2026-02-18 05:02:22.093390 | TASK [Run tempest] 2026-02-18 05:02:22.630530 | orchestrator | skipping: Conditional result was False 2026-02-18 05:02:22.649020 | 2026-02-18 05:02:22.649192 | TASK [Check prometheus alert status] 2026-02-18 05:02:23.184566 | orchestrator | skipping: Conditional result was False 2026-02-18 05:02:23.199912 | 2026-02-18 05:02:23.200100 | PLAY [Upgrade testbed] 2026-02-18 05:02:23.213512 | 2026-02-18 05:02:23.213645 | TASK [Print next ceph version] 2026-02-18 05:02:23.286624 | orchestrator | ok 2026-02-18 05:02:23.301309 | 2026-02-18 05:02:23.301540 | TASK [Print next openstack version] 2026-02-18 05:02:23.374374 | orchestrator | ok 2026-02-18 05:02:23.385623 | 2026-02-18 05:02:23.385751 | TASK [Print next manager version] 2026-02-18 05:02:23.464579 | orchestrator | ok 2026-02-18 05:02:23.476006 | 2026-02-18 05:02:23.476136 | TASK [Set cloud fact (Zuul deployment)] 2026-02-18 05:02:23.522080 | orchestrator | ok 2026-02-18 05:02:23.532436 | 2026-02-18 05:02:23.532559 | TASK [Set cloud fact (local deployment)] 2026-02-18 05:02:23.557478 | orchestrator | skipping: Conditional result was False 2026-02-18 05:02:23.573907 | 2026-02-18 
05:02:23.574056 | TASK [Fetch manager address] 2026-02-18 05:02:23.837065 | orchestrator | ok 2026-02-18 05:02:23.846824 | 2026-02-18 05:02:23.847019 | TASK [Set manager_host address] 2026-02-18 05:02:23.927630 | orchestrator | ok 2026-02-18 05:02:23.938789 | 2026-02-18 05:02:23.938962 | TASK [Run upgrade] 2026-02-18 05:02:24.633337 | orchestrator | + set -e 2026-02-18 05:02:24.633525 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-18 05:02:24.633551 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-18 05:02:24.633574 | orchestrator | + CEPH_VERSION=reef 2026-02-18 05:02:24.633587 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-18 05:02:24.633600 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-18 05:02:24.633623 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-02-18 05:02:24.641971 | orchestrator | + set -e 2026-02-18 05:02:24.642071 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-18 05:02:24.642804 | orchestrator | ++ export INTERACTIVE=false 2026-02-18 05:02:24.642893 | orchestrator | ++ INTERACTIVE=false 2026-02-18 05:02:24.642907 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-18 05:02:24.642926 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-18 05:02:24.644150 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-02-18 05:02:24.689431 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-02-18 05:02:24.690184 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-18 05:02:24.724762 | orchestrator | 2026-02-18 05:02:24.724885 | orchestrator | # UPGRADE MANAGER 2026-02-18 05:02:24.724915 | orchestrator | 2026-02-18 05:02:24.724926 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-02-18 05:02:24.724937 | orchestrator | + echo 2026-02-18 05:02:24.724972 | orchestrator | + echo '# UPGRADE 
MANAGER' 2026-02-18 05:02:24.724986 | orchestrator | + echo 2026-02-18 05:02:24.724997 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-18 05:02:24.725008 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-18 05:02:24.725018 | orchestrator | + CEPH_VERSION=reef 2026-02-18 05:02:24.725029 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-18 05:02:24.725039 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-18 05:02:24.725049 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-02-18 05:02:24.731279 | orchestrator | + set -e 2026-02-18 05:02:24.731350 | orchestrator | + VERSION=10.0.0-rc.1 2026-02-18 05:02:24.731363 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-02-18 05:02:24.736728 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-02-18 05:02:24.736770 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-18 05:02:24.739883 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-18 05:02:24.745365 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-18 05:02:24.755499 | orchestrator | /opt/configuration ~ 2026-02-18 05:02:24.755559 | orchestrator | + set -e 2026-02-18 05:02:24.755571 | orchestrator | + pushd /opt/configuration 2026-02-18 05:02:24.755583 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-18 05:02:24.755600 | orchestrator | + source /opt/venv/bin/activate 2026-02-18 05:02:24.756920 | orchestrator | ++ deactivate nondestructive 2026-02-18 05:02:24.757003 | orchestrator | ++ '[' -n '' ']' 2026-02-18 05:02:24.757026 | orchestrator | ++ '[' -n '' ']' 2026-02-18 05:02:24.757048 | orchestrator | ++ hash -r 2026-02-18 05:02:24.757069 | orchestrator | ++ '[' -n '' ']' 2026-02-18 05:02:24.757090 | orchestrator | ++ unset VIRTUAL_ENV 
2026-02-18 05:02:24.757109 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-18 05:02:24.757158 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-02-18 05:02:24.757196 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-18 05:02:24.757215 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-18 05:02:24.757236 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-18 05:02:24.757257 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-18 05:02:24.757278 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-18 05:02:24.757299 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-18 05:02:24.757318 | orchestrator | ++ export PATH 2026-02-18 05:02:24.757338 | orchestrator | ++ '[' -n '' ']' 2026-02-18 05:02:24.757364 | orchestrator | ++ '[' -z '' ']' 2026-02-18 05:02:24.757383 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-18 05:02:24.757403 | orchestrator | ++ PS1='(venv) ' 2026-02-18 05:02:24.757423 | orchestrator | ++ export PS1 2026-02-18 05:02:24.757442 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-18 05:02:24.757461 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-18 05:02:24.757480 | orchestrator | ++ hash -r 2026-02-18 05:02:24.757505 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-18 05:02:25.842505 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-18 05:02:25.843502 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-18 05:02:25.844982 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-18 05:02:25.846426 | orchestrator | Requirement already satisfied: PyYAML in 
/opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-18 05:02:25.847783 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0) 2026-02-18 05:02:25.859270 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-18 05:02:25.859744 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-18 05:02:25.860987 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-18 05:02:25.862758 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-18 05:02:25.905619 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-18 05:02:25.907501 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-18 05:02:25.909430 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-18 05:02:25.910635 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-18 05:02:25.914781 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-18 05:02:26.169488 | orchestrator | ++ which gilt 2026-02-18 05:02:26.172549 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-18 05:02:26.172567 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-18 05:02:26.411468 | orchestrator | osism.cfg-generics: 2026-02-18 05:02:26.529874 | orchestrator | - copied (v0.20251130.0) 
/home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-18 05:02:26.530334 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-18 05:02:26.532177 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-18 05:02:26.532212 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-18 05:02:27.511762 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-18 05:02:27.527248 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-18 05:02:28.030694 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-18 05:02:28.085611 | orchestrator | ~ 2026-02-18 05:02:28.085733 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-18 05:02:28.085752 | orchestrator | + deactivate 2026-02-18 05:02:28.085765 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-18 05:02:28.085779 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-18 05:02:28.085790 | orchestrator | + export PATH 2026-02-18 05:02:28.085801 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-18 05:02:28.085814 | orchestrator | + '[' -n '' ']' 2026-02-18 05:02:28.085825 | orchestrator | + hash -r 2026-02-18 05:02:28.085836 | orchestrator | + '[' -n '' ']' 2026-02-18 05:02:28.085847 | orchestrator | + unset VIRTUAL_ENV 2026-02-18 05:02:28.085858 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-18 05:02:28.085869 | 
orchestrator | + '[' '!' '' = nondestructive ']' 2026-02-18 05:02:28.085880 | orchestrator | + unset -f deactivate 2026-02-18 05:02:28.085891 | orchestrator | + popd 2026-02-18 05:02:28.089686 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-02-18 05:02:28.089802 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-02-18 05:02:28.098301 | orchestrator | + set -e 2026-02-18 05:02:28.098345 | orchestrator | + NAMESPACE=kolla/release 2026-02-18 05:02:28.098360 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-18 05:02:28.104932 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-18 05:02:28.110266 | orchestrator | /opt/configuration ~ 2026-02-18 05:02:28.110331 | orchestrator | + set -e 2026-02-18 05:02:28.110376 | orchestrator | + pushd /opt/configuration 2026-02-18 05:02:28.110390 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-18 05:02:28.110448 | orchestrator | + source /opt/venv/bin/activate 2026-02-18 05:02:28.110459 | orchestrator | ++ deactivate nondestructive 2026-02-18 05:02:28.110470 | orchestrator | ++ '[' -n '' ']' 2026-02-18 05:02:28.110482 | orchestrator | ++ '[' -n '' ']' 2026-02-18 05:02:28.110493 | orchestrator | ++ hash -r 2026-02-18 05:02:28.110503 | orchestrator | ++ '[' -n '' ']' 2026-02-18 05:02:28.110514 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-18 05:02:28.110525 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-18 05:02:28.110537 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-18 05:02:28.110548 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-18 05:02:28.110559 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-18 05:02:28.110570 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-18 05:02:28.110586 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-18 05:02:28.110598 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-18 05:02:28.110617 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-18 05:02:28.110629 | orchestrator | ++ export PATH 2026-02-18 05:02:28.110640 | orchestrator | ++ '[' -n '' ']' 2026-02-18 05:02:28.110651 | orchestrator | ++ '[' -z '' ']' 2026-02-18 05:02:28.110662 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-18 05:02:28.110673 | orchestrator | ++ PS1='(venv) ' 2026-02-18 05:02:28.110684 | orchestrator | ++ export PS1 2026-02-18 05:02:28.110694 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-18 05:02:28.110705 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-18 05:02:28.110716 | orchestrator | ++ hash -r 2026-02-18 05:02:28.110727 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-18 05:02:28.696753 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-18 05:02:28.697352 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-18 05:02:28.699017 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-18 05:02:28.700668 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-18 05:02:28.701925 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-18 05:02:28.712539 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-18 05:02:28.714200 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-18 05:02:28.715194 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-18 05:02:28.716773 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-18 05:02:28.758216 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-18 05:02:28.759874 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-18 05:02:28.762062 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-18 05:02:28.763389 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-18 05:02:28.767376 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-18 05:02:29.021865 | orchestrator | ++ which gilt 2026-02-18 05:02:29.023786 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-18 05:02:29.024340 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-18 05:02:29.241468 | orchestrator | osism.cfg-generics: 2026-02-18 05:02:29.411123 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-18 05:02:29.411938 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-18 05:02:29.412056 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-18 05:02:29.412074 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-18 05:02:30.027270 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-18 05:02:30.043119 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-18 05:02:30.446719 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-18 05:02:30.513012 | orchestrator | ~ 2026-02-18 05:02:30.513126 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-18 05:02:30.513152 | orchestrator | + deactivate 2026-02-18 05:02:30.513200 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-18 05:02:30.513223 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-18 05:02:30.513242 | orchestrator | + export PATH 2026-02-18 05:02:30.513262 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-18 05:02:30.513284 | orchestrator | + '[' -n '' ']' 2026-02-18 05:02:30.513304 | orchestrator | + hash -r 2026-02-18 05:02:30.513324 | orchestrator | + '[' -n '' ']' 2026-02-18 05:02:30.513359 | orchestrator | + unset VIRTUAL_ENV 2026-02-18 05:02:30.513379 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-18 05:02:30.513413 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-18 05:02:30.513432 | orchestrator | + unset -f deactivate 2026-02-18 05:02:30.513450 | orchestrator | + popd 2026-02-18 05:02:30.515121 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-02-18 05:02:30.571626 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-18 05:02:30.572530 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-18 05:02:30.673772 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-18 05:02:30.673891 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-18 05:02:30.683228 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-18 05:02:30.691554 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-02-18 05:02:30.763341 | orchestrator | ++ '[' -1 -le 0 ']' 2026-02-18 05:02:30.764564 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-02-18 05:02:30.877144 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-02-18 05:02:30.877240 | orchestrator | ++ echo true 2026-02-18 05:02:30.877256 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-02-18 05:02:30.879476 | orchestrator | +++ semver 2024.2 2024.2 2026-02-18 05:02:30.967648 | orchestrator | ++ '[' 0 -le 0 ']' 2026-02-18 05:02:30.968714 | orchestrator | +++ semver 2024.2 2025.1 2026-02-18 05:02:31.015524 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-02-18 05:02:31.015620 | orchestrator | ++ echo false 2026-02-18 05:02:31.015630 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-02-18 05:02:31.015733 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-18 05:02:31.015745 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-02-18 05:02:31.015752 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-02-18 05:02:31.015761 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-02-18 05:02:31.019939 | orchestrator | + 
echo 'export RABBITMQ3TO4=true' 2026-02-18 05:02:31.019997 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-02-18 05:02:31.033372 | orchestrator | export RABBITMQ3TO4=true 2026-02-18 05:02:31.036226 | orchestrator | + osism update manager 2026-02-18 05:02:36.965464 | orchestrator | Collecting uv 2026-02-18 05:02:37.083793 | orchestrator | Downloading uv-0.10.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-02-18 05:02:37.107602 | orchestrator | Downloading uv-0.10.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.1 MB) 2026-02-18 05:02:37.897729 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.1/23.1 MB 34.7 MB/s eta 0:00:00 2026-02-18 05:02:37.975280 | orchestrator | Installing collected packages: uv 2026-02-18 05:02:38.474341 | orchestrator | Successfully installed uv-0.10.4 2026-02-18 05:02:39.037217 | orchestrator | Resolved 11 packages in 282ms 2026-02-18 05:02:39.072104 | orchestrator | Downloading cryptography (4.3MiB) 2026-02-18 05:02:39.072555 | orchestrator | Downloading netaddr (2.2MiB) 2026-02-18 05:02:39.072585 | orchestrator | Downloading ansible (54.5MiB) 2026-02-18 05:02:39.072801 | orchestrator | Downloading ansible-core (2.1MiB) 2026-02-18 05:02:39.545937 | orchestrator | Downloaded netaddr 2026-02-18 05:02:39.583301 | orchestrator | Downloaded ansible-core 2026-02-18 05:02:39.740081 | orchestrator | Downloaded cryptography 2026-02-18 05:02:46.306910 | orchestrator | Downloaded ansible 2026-02-18 05:02:46.307252 | orchestrator | Prepared 11 packages in 7.27s 2026-02-18 05:02:46.807756 | orchestrator | Installed 11 packages in 498ms 2026-02-18 05:02:46.807847 | orchestrator | + ansible==11.11.0 2026-02-18 05:02:46.807862 | orchestrator | + ansible-core==2.18.13 2026-02-18 05:02:46.807874 | orchestrator | + cffi==2.0.0 2026-02-18 05:02:46.807886 | orchestrator | + cryptography==46.0.5 2026-02-18 05:02:46.807898 | orchestrator | + jinja2==3.1.6 2026-02-18 05:02:46.807909 | orchestrator | 
+ markupsafe==3.0.3 2026-02-18 05:02:46.807921 | orchestrator | + netaddr==1.3.0 2026-02-18 05:02:46.807932 | orchestrator | + packaging==26.0 2026-02-18 05:02:46.807943 | orchestrator | + pycparser==3.0 2026-02-18 05:02:46.807980 | orchestrator | + pyyaml==6.0.3 2026-02-18 05:02:46.807992 | orchestrator | + resolvelib==1.0.1 2026-02-18 05:02:47.908248 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-202598n1crqjp3/tmperukesx7/ansible-collection-servicesu81tceq3'... 2026-02-18 05:02:49.551219 | orchestrator | Your branch is up to date with 'origin/main'. 2026-02-18 05:02:49.551319 | orchestrator | Already on 'main' 2026-02-18 05:02:50.047260 | orchestrator | Starting galaxy collection install process 2026-02-18 05:02:50.047361 | orchestrator | Process install dependency map 2026-02-18 05:02:50.047378 | orchestrator | Starting collection install process 2026-02-18 05:02:50.047390 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-02-18 05:02:50.047404 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-02-18 05:02:50.047415 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-18 05:02:50.631665 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-202667vu7d2zel/tmpb8e61c31/ansible-playbooks-managerjyt1vf06'... 2026-02-18 05:02:51.187161 | orchestrator | Your branch is up to date with 'origin/main'. 
2026-02-18 05:02:51.187273 | orchestrator | Already on 'main' 2026-02-18 05:02:51.458904 | orchestrator | Starting galaxy collection install process 2026-02-18 05:02:51.459072 | orchestrator | Process install dependency map 2026-02-18 05:02:51.459091 | orchestrator | Starting collection install process 2026-02-18 05:02:51.459105 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-02-18 05:02:51.459118 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-02-18 05:02:51.459129 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-02-18 05:02:52.140801 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-02-18 05:02:52.140905 | orchestrator | -vvvv to see details 2026-02-18 05:02:52.635789 | orchestrator | 2026-02-18 05:02:52.635891 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-02-18 05:02:52.635910 | orchestrator | 2026-02-18 05:02:52.635921 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-18 05:02:56.966577 | orchestrator | ok: [testbed-manager] 2026-02-18 05:02:56.966674 | orchestrator | 2026-02-18 05:02:56.966687 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-18 05:02:57.046836 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-18 05:02:57.046925 | orchestrator | 2026-02-18 05:02:57.047068 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-18 05:02:58.803820 | orchestrator | ok: [testbed-manager] 2026-02-18 05:02:58.803894 | orchestrator | 2026-02-18 05:02:58.803903 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-02-18 05:02:58.858226 | orchestrator | ok: [testbed-manager] 2026-02-18 05:02:58.858331 | orchestrator | 2026-02-18 05:02:58.858347 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-18 05:02:58.946728 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-18 05:02:58.946825 | orchestrator | 2026-02-18 05:02:58.946846 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-18 05:03:03.215628 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-02-18 05:03:03.215754 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-02-18 05:03:03.215776 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-18 05:03:03.215809 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-02-18 05:03:03.215827 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-18 05:03:03.215844 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-18 05:03:03.215862 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-18 05:03:03.215880 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-02-18 05:03:03.215897 | orchestrator | 2026-02-18 05:03:03.215917 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-18 05:03:04.388037 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:04.388136 | orchestrator | 2026-02-18 05:03:04.388152 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-18 05:03:05.405152 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:05.405266 | orchestrator | 2026-02-18 05:03:05.405283 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-02-18 05:03:05.490690 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-18 05:03:05.490815 | orchestrator | 2026-02-18 05:03:05.490842 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-18 05:03:07.519736 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-02-18 05:03:07.519845 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-02-18 05:03:07.519861 | orchestrator | 2026-02-18 05:03:07.519875 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-18 05:03:08.507461 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:08.507574 | orchestrator | 2026-02-18 05:03:08.507592 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-18 05:03:08.577412 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:03:08.577512 | orchestrator | 2026-02-18 05:03:08.577530 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-18 05:03:08.675406 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-18 05:03:08.675505 | orchestrator | 2026-02-18 05:03:08.675519 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-18 05:03:09.709416 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:09.709519 | orchestrator | 2026-02-18 05:03:09.709537 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-18 05:03:09.778268 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-18 05:03:09.778401 | 
orchestrator | 2026-02-18 05:03:09.778427 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-18 05:03:11.824562 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-18 05:03:11.824669 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-18 05:03:11.824685 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:11.824699 | orchestrator | 2026-02-18 05:03:11.824711 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-18 05:03:12.776256 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:12.777029 | orchestrator | 2026-02-18 05:03:12.777063 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-18 05:03:12.841591 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:03:12.841689 | orchestrator | 2026-02-18 05:03:12.841710 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-18 05:03:12.940627 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-18 05:03:12.940731 | orchestrator | 2026-02-18 05:03:12.940747 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-18 05:03:13.635661 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:13.635761 | orchestrator | 2026-02-18 05:03:13.635778 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-18 05:03:14.212527 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:14.212634 | orchestrator | 2026-02-18 05:03:14.212677 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-18 05:03:16.125114 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-02-18 05:03:16.125222 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-02-18 05:03:16.125237 | orchestrator | 2026-02-18 05:03:16.125250 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-18 05:03:17.262468 | orchestrator | changed: [testbed-manager] 2026-02-18 05:03:17.262566 | orchestrator | 2026-02-18 05:03:17.262582 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-18 05:03:17.806398 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:17.807363 | orchestrator | 2026-02-18 05:03:17.807411 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-18 05:03:18.362731 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:18.362831 | orchestrator | 2026-02-18 05:03:18.362871 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-18 05:03:18.419464 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:03:18.419577 | orchestrator | 2026-02-18 05:03:18.419601 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-18 05:03:18.495695 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-18 05:03:18.495790 | orchestrator | 2026-02-18 05:03:18.495806 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-18 05:03:18.546431 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:18.546518 | orchestrator | 2026-02-18 05:03:18.546533 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-18 05:03:21.447877 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-02-18 05:03:21.447992 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-02-18 05:03:21.448002 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-02-18 05:03:21.448008 | orchestrator | 2026-02-18 05:03:21.448015 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-18 05:03:22.472565 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:22.472667 | orchestrator | 2026-02-18 05:03:22.472684 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-18 05:03:23.512059 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:23.512797 | orchestrator | 2026-02-18 05:03:23.512819 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-18 05:03:24.521817 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:24.521929 | orchestrator | 2026-02-18 05:03:24.521946 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-18 05:03:24.597948 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-18 05:03:24.598111 | orchestrator | 2026-02-18 05:03:24.598127 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-18 05:03:24.662755 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:24.662846 | orchestrator | 2026-02-18 05:03:24.662859 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-18 05:03:25.616928 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-02-18 05:03:25.617102 | orchestrator | 2026-02-18 05:03:25.617121 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-18 05:03:25.709313 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-18 05:03:25.709410 | orchestrator | 2026-02-18 05:03:25.709424 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-18 05:03:26.737331 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:26.737432 | orchestrator | 2026-02-18 05:03:26.737449 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-18 05:03:27.823228 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:27.823329 | orchestrator | 2026-02-18 05:03:27.823346 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-18 05:03:27.885753 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:03:27.885848 | orchestrator | 2026-02-18 05:03:27.885864 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-18 05:03:27.943630 | orchestrator | ok: [testbed-manager] 2026-02-18 05:03:27.943748 | orchestrator | 2026-02-18 05:03:27.943771 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-18 05:03:29.292245 | orchestrator | changed: [testbed-manager] 2026-02-18 05:03:29.293146 | orchestrator | 2026-02-18 05:03:29.293173 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-18 05:04:39.462388 | orchestrator | changed: [testbed-manager] 2026-02-18 05:04:39.462508 | orchestrator | 2026-02-18 05:04:39.462525 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-18 05:04:40.677111 | orchestrator | ok: [testbed-manager] 2026-02-18 05:04:40.677196 | orchestrator | 2026-02-18 05:04:40.677205 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-18 05:04:40.745613 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:04:40.745729 | orchestrator | 2026-02-18 05:04:40.745745 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-18 
05:04:41.604344 | orchestrator | ok: [testbed-manager] 2026-02-18 05:04:41.604427 | orchestrator | 2026-02-18 05:04:41.604439 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-18 05:04:41.686586 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:04:41.686659 | orchestrator | 2026-02-18 05:04:41.686668 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-18 05:04:41.686676 | orchestrator | 2026-02-18 05:04:41.686682 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-18 05:04:56.296687 | orchestrator | changed: [testbed-manager] 2026-02-18 05:04:56.296811 | orchestrator | 2026-02-18 05:04:56.296828 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-18 05:05:56.384527 | orchestrator | Pausing for 60 seconds 2026-02-18 05:05:56.384642 | orchestrator | changed: [testbed-manager] 2026-02-18 05:05:56.384660 | orchestrator | 2026-02-18 05:05:56.384673 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-02-18 05:05:56.435024 | orchestrator | ok: [testbed-manager] 2026-02-18 05:05:56.435168 | orchestrator | 2026-02-18 05:05:56.435191 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-18 05:06:00.008971 | orchestrator | changed: [testbed-manager] 2026-02-18 05:06:00.009128 | orchestrator | 2026-02-18 05:06:00.009154 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-18 05:07:03.027245 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-18 05:07:03.027407 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-02-18 05:07:03.027423 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-02-18 05:07:03.027435 | orchestrator | changed: [testbed-manager] 2026-02-18 05:07:03.027448 | orchestrator | 2026-02-18 05:07:03.027460 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-18 05:07:14.644716 | orchestrator | changed: [testbed-manager] 2026-02-18 05:07:14.644847 | orchestrator | 2026-02-18 05:07:14.644867 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-18 05:07:14.753549 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-18 05:07:14.753684 | orchestrator | 2026-02-18 05:07:14.753702 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-18 05:07:14.753715 | orchestrator | 2026-02-18 05:07:14.753726 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-18 05:07:14.822277 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:07:14.822378 | orchestrator | 2026-02-18 05:07:14.822395 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-18 05:07:14.907417 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-18 05:07:14.907507 | orchestrator | 2026-02-18 05:07:14.907545 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-18 05:07:16.019915 | orchestrator | changed: [testbed-manager] 2026-02-18 05:07:16.020054 | orchestrator | 2026-02-18 05:07:16.020068 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-18 05:07:19.792533 
| orchestrator | ok: [testbed-manager] 2026-02-18 05:07:19.792651 | orchestrator | 2026-02-18 05:07:19.792660 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-18 05:07:19.903772 | orchestrator | ok: [testbed-manager] => { 2026-02-18 05:07:19.903870 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-18 05:07:19.903878 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-18 05:07:19.903884 | orchestrator | "Checking running containers against expected versions...", 2026-02-18 05:07:19.903890 | orchestrator | "", 2026-02-18 05:07:19.903896 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-18 05:07:19.903901 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-18 05:07:19.903907 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.903911 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-18 05:07:19.903916 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.903921 | orchestrator | "", 2026-02-18 05:07:19.903926 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-18 05:07:19.903931 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-18 05:07:19.903936 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.903941 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-18 05:07:19.903945 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.903950 | orchestrator | "", 2026-02-18 05:07:19.903954 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-18 05:07:19.903959 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-18 05:07:19.903964 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.903968 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-18 05:07:19.903972 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.903977 | orchestrator | "", 2026-02-18 05:07:19.903981 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-18 05:07:19.903986 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-18 05:07:19.904029 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904035 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-18 05:07:19.904039 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.904043 | orchestrator | "", 2026-02-18 05:07:19.904048 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-18 05:07:19.904053 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-18 05:07:19.904057 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904062 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-18 05:07:19.904066 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.904070 | orchestrator | "", 2026-02-18 05:07:19.904075 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-18 05:07:19.904102 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904106 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904111 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904115 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.904120 | orchestrator | "", 2026-02-18 05:07:19.904124 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-18 05:07:19.904129 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-18 05:07:19.904133 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904138 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-18 
05:07:19.904142 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.904146 | orchestrator | "", 2026-02-18 05:07:19.904151 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-18 05:07:19.904155 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-18 05:07:19.904159 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904170 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-18 05:07:19.904175 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.904179 | orchestrator | "", 2026-02-18 05:07:19.904184 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-18 05:07:19.904188 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-18 05:07:19.904192 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904197 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-18 05:07:19.904201 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.904206 | orchestrator | "", 2026-02-18 05:07:19.904214 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-18 05:07:19.904219 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-18 05:07:19.904223 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904228 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-18 05:07:19.904232 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.904236 | orchestrator | "", 2026-02-18 05:07:19.904241 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-18 05:07:19.904245 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904249 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904254 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904258 | orchestrator | " Status: ✅ MATCH", 2026-02-18 
05:07:19.904263 | orchestrator | "", 2026-02-18 05:07:19.904267 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-18 05:07:19.904271 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904276 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904280 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904284 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.904289 | orchestrator | "", 2026-02-18 05:07:19.904293 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-18 05:07:19.904297 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904302 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904306 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904311 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.904316 | orchestrator | "", 2026-02-18 05:07:19.904321 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-18 05:07:19.904326 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904331 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904336 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904356 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.904361 | orchestrator | "", 2026-02-18 05:07:19.904367 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-18 05:07:19.904372 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904381 | orchestrator | " Enabled: true", 2026-02-18 05:07:19.904387 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-18 05:07:19.904392 | orchestrator | " Status: ✅ MATCH", 2026-02-18 05:07:19.904397 | orchestrator | "", 2026-02-18 05:07:19.904401 | orchestrator | "=== Summary 
===", 2026-02-18 05:07:19.904406 | orchestrator | "Errors (version mismatches): 0", 2026-02-18 05:07:19.904412 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-18 05:07:19.904417 | orchestrator | "", 2026-02-18 05:07:19.904422 | orchestrator | "✅ All running containers match expected versions!" 2026-02-18 05:07:19.904427 | orchestrator | ] 2026-02-18 05:07:19.904432 | orchestrator | } 2026-02-18 05:07:19.904438 | orchestrator | 2026-02-18 05:07:19.904443 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-18 05:07:19.978482 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:07:19.978589 | orchestrator | 2026-02-18 05:07:19.978599 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:07:19.978608 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-02-18 05:07:19.978615 | orchestrator | 2026-02-18 05:07:32.605410 | orchestrator | 2026-02-18 05:07:32 | INFO  | Task 296c8610-3b3b-4be4-b8bb-5066c4e13e95 (sync inventory) is running in background. Output coming soon. 
2026-02-18 05:08:02.204603 | orchestrator | 2026-02-18 05:07:34 | INFO  | Starting group_vars file reorganization 2026-02-18 05:08:02.204742 | orchestrator | 2026-02-18 05:07:34 | INFO  | Moved 0 file(s) to their respective directories 2026-02-18 05:08:02.204770 | orchestrator | 2026-02-18 05:07:34 | INFO  | Group_vars file reorganization completed 2026-02-18 05:08:02.204812 | orchestrator | 2026-02-18 05:07:37 | INFO  | Starting variable preparation from inventory 2026-02-18 05:08:02.204833 | orchestrator | 2026-02-18 05:07:40 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-18 05:08:02.204851 | orchestrator | 2026-02-18 05:07:40 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-18 05:08:02.204870 | orchestrator | 2026-02-18 05:07:40 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-18 05:08:02.204886 | orchestrator | 2026-02-18 05:07:40 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-18 05:08:02.204903 | orchestrator | 2026-02-18 05:07:40 | INFO  | Variable preparation completed 2026-02-18 05:08:02.204920 | orchestrator | 2026-02-18 05:07:42 | INFO  | Starting inventory overwrite handling 2026-02-18 05:08:02.204939 | orchestrator | 2026-02-18 05:07:42 | INFO  | Handling group overwrites in 99-overwrite 2026-02-18 05:08:02.204956 | orchestrator | 2026-02-18 05:07:42 | INFO  | Removing group frr:children from 60-generic 2026-02-18 05:08:02.204973 | orchestrator | 2026-02-18 05:07:42 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-18 05:08:02.204989 | orchestrator | 2026-02-18 05:07:42 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-18 05:08:02.205038 | orchestrator | 2026-02-18 05:07:42 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-18 05:08:02.205056 | orchestrator | 2026-02-18 05:07:42 | INFO  | Handling group overwrites in 20-roles 2026-02-18 05:08:02.205074 | orchestrator | 2026-02-18 05:07:42 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-18 05:08:02.205092 | orchestrator | 2026-02-18 05:07:42 | INFO  | Removed 5 group(s) in total 2026-02-18 05:08:02.205109 | orchestrator | 2026-02-18 05:07:42 | INFO  | Inventory overwrite handling completed 2026-02-18 05:08:02.205126 | orchestrator | 2026-02-18 05:07:43 | INFO  | Starting merge of inventory files 2026-02-18 05:08:02.205144 | orchestrator | 2026-02-18 05:07:43 | INFO  | Inventory files merged successfully 2026-02-18 05:08:02.205192 | orchestrator | 2026-02-18 05:07:48 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-18 05:08:02.205214 | orchestrator | 2026-02-18 05:08:00 | INFO  | Successfully wrote ClusterShell configuration 2026-02-18 05:08:02.545344 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-18 05:08:02.545479 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-18 05:08:02.545495 | orchestrator | + local max_attempts=60 2026-02-18 05:08:02.545510 | orchestrator | + local name=kolla-ansible 2026-02-18 05:08:02.545524 | orchestrator | + local attempt_num=1 2026-02-18 05:08:02.546141 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-18 05:08:02.583383 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-18 05:08:02.583500 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-18 05:08:02.583514 | orchestrator | + local max_attempts=60 2026-02-18 05:08:02.583527 | orchestrator | + local name=osism-ansible 2026-02-18 05:08:02.583539 | orchestrator | + local attempt_num=1 2026-02-18 05:08:02.584552 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-18 05:08:02.624952 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-18 05:08:02.625098 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-18 05:08:02.814462 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-18 05:08:02.814591 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-18 05:08:02.814606 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-18 05:08:02.814619 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-18 05:08:02.814636 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-02-18 05:08:02.814647 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-02-18 05:08:02.814658 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-02-18 05:08:02.814669 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-02-18 05:08:02.814680 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 21 seconds ago 2026-02-18 05:08:02.814691 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-02-18 05:08:02.814702 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-02-18 05:08:02.814712 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-02-18 05:08:02.814723 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-18 05:08:02.814765 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-02-18 05:08:02.814777 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-02-18 05:08:02.814788 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-02-18 05:08:02.822466 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-02-18 05:08:02.822541 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-02-18 05:08:02.822553 | orchestrator | + osism apply facts 2026-02-18 05:08:15.077329 | orchestrator | 2026-02-18 05:08:15 | INFO  | Task 637c2b9f-14cf-4524-b382-040fc8414f9a (facts) was prepared for execution. 2026-02-18 05:08:15.077434 | orchestrator | 2026-02-18 05:08:15 | INFO  | It takes a moment until task 637c2b9f-14cf-4524-b382-040fc8414f9a (facts) has been started and output is visible here. 
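The `wait_for_container_healthy` helper traced above polls `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy` or the attempt budget runs out. The same pattern, sketched in Python with the probe injected so the loop can be exercised without Docker (names and the retry interval are illustrative assumptions):

```python
import subprocess
import time


def docker_health(name):
    """Probe one container's health, equivalent to the traced
    `docker inspect -f '{{.State.Health.Status}}' <name>`."""
    result = subprocess.run(
        ["docker", "inspect", "-f", "{{.State.Health.Status}}", name],
        capture_output=True, text=True,
    )
    return result.stdout.strip()


def wait_for_healthy(probe, max_attempts, interval=5.0):
    """Call probe() until it reports 'healthy'.

    Returns True once healthy, False if max_attempts is exhausted.
    Pass probe=lambda: docker_health("kolla-ansible") for real use.
    """
    for attempt in range(1, max_attempts + 1):
        if probe() == "healthy":
            return True
        if attempt < max_attempts:
            time.sleep(interval)
    return False
```

Injecting the probe keeps the retry logic testable; in the job above the containers were already healthy, so the loop returns on the first attempt.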
2026-02-18 05:08:38.267235 | orchestrator | 2026-02-18 05:08:38.267322 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-18 05:08:38.267330 | orchestrator | 2026-02-18 05:08:38.267335 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-18 05:08:38.267340 | orchestrator | Wednesday 18 February 2026 05:08:21 +0000 (0:00:02.177) 0:00:02.177 **** 2026-02-18 05:08:38.267346 | orchestrator | ok: [testbed-manager] 2026-02-18 05:08:38.267352 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:08:38.267357 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:08:38.267361 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:08:38.267366 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:08:38.267371 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:08:38.267376 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:08:38.267380 | orchestrator | 2026-02-18 05:08:38.267385 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-18 05:08:38.267390 | orchestrator | Wednesday 18 February 2026 05:08:25 +0000 (0:00:03.556) 0:00:05.734 **** 2026-02-18 05:08:38.267395 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:08:38.267401 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:08:38.267406 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:08:38.267410 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:08:38.267415 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:08:38.267419 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:08:38.267424 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:08:38.267428 | orchestrator | 2026-02-18 05:08:38.267433 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-18 05:08:38.267437 | orchestrator | 2026-02-18 05:08:38.267442 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-18 05:08:38.267447 | orchestrator | Wednesday 18 February 2026 05:08:27 +0000 (0:00:02.585) 0:00:08.320 **** 2026-02-18 05:08:38.267451 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:08:38.267471 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:08:38.267477 | orchestrator | ok: [testbed-manager] 2026-02-18 05:08:38.267481 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:08:38.267488 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:08:38.267493 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:08:38.267497 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:08:38.267502 | orchestrator | 2026-02-18 05:08:38.267507 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-18 05:08:38.267511 | orchestrator | 2026-02-18 05:08:38.267516 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-18 05:08:38.267520 | orchestrator | Wednesday 18 February 2026 05:08:34 +0000 (0:00:07.169) 0:00:15.489 **** 2026-02-18 05:08:38.267527 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:08:38.267555 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:08:38.267563 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:08:38.267570 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:08:38.267577 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:08:38.267584 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:08:38.267592 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:08:38.267599 | orchestrator | 2026-02-18 05:08:38.267607 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:08:38.267614 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 05:08:38.267623 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-18 05:08:38.267629 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 05:08:38.267634 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 05:08:38.267638 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 05:08:38.267643 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 05:08:38.267647 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-18 05:08:38.267652 | orchestrator | 2026-02-18 05:08:38.267656 | orchestrator | 2026-02-18 05:08:38.267661 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 05:08:38.267665 | orchestrator | Wednesday 18 February 2026 05:08:37 +0000 (0:00:02.738) 0:00:18.228 **** 2026-02-18 05:08:38.267670 | orchestrator | =============================================================================== 2026-02-18 05:08:38.267674 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.17s 2026-02-18 05:08:38.267679 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.56s 2026-02-18 05:08:38.267683 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.74s 2026-02-18 05:08:38.267688 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.59s 2026-02-18 05:08:38.576519 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-18 05:08:38.682425 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-18 05:08:38.683260 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-18 05:08:38.723081 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-02-18 05:08:38.723161 | 
orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-02-18 05:08:38.731250 | orchestrator | + set -e 2026-02-18 05:08:38.731305 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-02-18 05:08:38.731320 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-18 05:08:38.741600 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-02-18 05:08:38.751472 | orchestrator | 2026-02-18 05:08:38.751507 | orchestrator | # UPGRADE SERVICES 2026-02-18 05:08:38.751520 | orchestrator | 2026-02-18 05:08:38.751533 | orchestrator | + set -e 2026-02-18 05:08:38.751545 | orchestrator | + echo 2026-02-18 05:08:38.751556 | orchestrator | + echo '# UPGRADE SERVICES' 2026-02-18 05:08:38.751568 | orchestrator | + echo 2026-02-18 05:08:38.751579 | orchestrator | + source /opt/manager-vars.sh 2026-02-18 05:08:38.752499 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-18 05:08:38.752531 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-18 05:08:38.752550 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-18 05:08:38.752624 | orchestrator | ++ CEPH_VERSION=reef 2026-02-18 05:08:38.752643 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-18 05:08:38.752663 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-18 05:08:38.752682 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-18 05:08:38.752736 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-18 05:08:38.752754 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-18 05:08:38.752771 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-18 05:08:38.752787 | orchestrator | ++ export ARA=false 2026-02-18 05:08:38.752806 | orchestrator | ++ ARA=false 2026-02-18 05:08:38.753002 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-18 05:08:38.753069 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-18 05:08:38.753088 | orchestrator | ++ export TEMPEST=false 
2026-02-18 05:08:38.753106 | orchestrator | ++ TEMPEST=false
2026-02-18 05:08:38.753123 | orchestrator | ++ export IS_ZUUL=true
2026-02-18 05:08:38.753138 | orchestrator | ++ IS_ZUUL=true
2026-02-18 05:08:38.753155 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 05:08:38.753171 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 05:08:38.753188 | orchestrator | ++ export EXTERNAL_API=false
2026-02-18 05:08:38.753204 | orchestrator | ++ EXTERNAL_API=false
2026-02-18 05:08:38.753222 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-18 05:08:38.753241 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-18 05:08:38.753259 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-18 05:08:38.753278 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-18 05:08:38.753296 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-18 05:08:38.753314 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-18 05:08:38.753332 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-18 05:08:38.753350 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-18 05:08:38.753370 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-02-18 05:08:38.753389 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-02-18 05:08:38.753410 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-18 05:08:38.762198 | orchestrator | + set -e
2026-02-18 05:08:38.762257 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-18 05:08:38.762872 | orchestrator | ++ export INTERACTIVE=false
2026-02-18 05:08:38.762913 | orchestrator | ++ INTERACTIVE=false
2026-02-18 05:08:38.762934 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-18 05:08:38.763003 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-18 05:08:38.763636 | orchestrator | + source /opt/manager-vars.sh
2026-02-18 05:08:38.763664 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-18 05:08:38.763675 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-18 05:08:38.763686 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-18 05:08:38.763697 | orchestrator | ++ CEPH_VERSION=reef
2026-02-18 05:08:38.763708 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-18 05:08:38.763720 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-18 05:08:38.763750 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-18 05:08:38.763761 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-18 05:08:38.763773 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-18 05:08:38.763784 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-18 05:08:38.763795 | orchestrator | ++ export ARA=false
2026-02-18 05:08:38.763805 | orchestrator | ++ ARA=false
2026-02-18 05:08:38.763816 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-18 05:08:38.763827 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-18 05:08:38.763838 | orchestrator | ++ export TEMPEST=false
2026-02-18 05:08:38.763849 | orchestrator | ++ TEMPEST=false
2026-02-18 05:08:38.763859 | orchestrator | ++ export IS_ZUUL=true
2026-02-18 05:08:38.763870 | orchestrator | ++ IS_ZUUL=true
2026-02-18 05:08:38.763881 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 05:08:38.763892 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 05:08:38.763903 | orchestrator | ++ export EXTERNAL_API=false
2026-02-18 05:08:38.763913 | orchestrator | ++ EXTERNAL_API=false
2026-02-18 05:08:38.763924 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-18 05:08:38.763934 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-18 05:08:38.763946 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-18 05:08:38.763958 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-18 05:08:38.763969 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-18 05:08:38.763980 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-18 05:08:38.763991 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-18 05:08:38.764001 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-18 05:08:38.764060 | orchestrator |
2026-02-18 05:08:38.764074 | orchestrator | # PULL IMAGES
2026-02-18 05:08:38.764085 | orchestrator |
2026-02-18 05:08:38.764096 | orchestrator | + echo
2026-02-18 05:08:38.764107 | orchestrator | + echo '# PULL IMAGES'
2026-02-18 05:08:38.764118 | orchestrator | + echo
2026-02-18 05:08:38.765087 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-18 05:08:38.834236 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-18 05:08:38.834333 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-18 05:08:40.924905 | orchestrator | 2026-02-18 05:08:40 | INFO  | Trying to run play pull-images in environment custom
2026-02-18 05:08:51.034628 | orchestrator | 2026-02-18 05:08:51 | INFO  | Task ef11e2a5-cbff-468b-82d3-96f8e51353d3 (pull-images) was prepared for execution.
2026-02-18 05:08:51.034770 | orchestrator | 2026-02-18 05:08:51 | INFO  | Task ef11e2a5-cbff-468b-82d3-96f8e51353d3 is running in background. No more output. Check ARA for logs.
2026-02-18 05:08:51.364811 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-02-18 05:08:51.379109 | orchestrator | + set -e
2026-02-18 05:08:51.379166 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-18 05:08:51.379176 | orchestrator | ++ export INTERACTIVE=false
2026-02-18 05:08:51.379182 | orchestrator | ++ INTERACTIVE=false
2026-02-18 05:08:51.379188 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-18 05:08:51.379194 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-18 05:08:51.379228 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-18 05:08:51.381622 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-18 05:08:51.394182 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-18 05:08:51.394213 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-18 05:08:51.394787 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-02-18 05:08:51.453172 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-18 05:08:51.453259 | orchestrator | + osism apply frr
2026-02-18 05:09:03.599289 | orchestrator | 2026-02-18 05:09:03 | INFO  | Task e8a70cff-6fb2-4d2f-825a-2afc73f64283 (frr) was prepared for execution.
2026-02-18 05:09:03.599368 | orchestrator | 2026-02-18 05:09:03 | INFO  | It takes a moment until task e8a70cff-6fb2-4d2f-825a-2afc73f64283 (frr) has been started and output is visible here.
2026-02-18 05:09:36.486919 | orchestrator |
2026-02-18 05:09:36.487116 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-18 05:09:36.487149 | orchestrator |
2026-02-18 05:09:36.487169 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-18 05:09:36.487189 | orchestrator | Wednesday 18 February 2026 05:09:12 +0000 (0:00:04.142) 0:00:04.142 ****
2026-02-18 05:09:36.487208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-18 05:09:36.487227 | orchestrator |
2026-02-18 05:09:36.487239 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-18 05:09:36.487251 | orchestrator | Wednesday 18 February 2026 05:09:14 +0000 (0:00:01.854) 0:00:05.996 ****
2026-02-18 05:09:36.487262 | orchestrator | ok: [testbed-manager]
2026-02-18 05:09:36.487274 | orchestrator |
2026-02-18 05:09:36.487285 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-18 05:09:36.487297 | orchestrator | Wednesday 18 February 2026 05:09:16 +0000 (0:00:02.440) 0:00:08.439 ****
2026-02-18 05:09:36.487308 | orchestrator | ok: [testbed-manager]
2026-02-18 05:09:36.487318 | orchestrator |
2026-02-18 05:09:36.487329 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-18 05:09:36.487340 | orchestrator | Wednesday 18 February 2026 05:09:19 +0000 (0:00:02.811) 0:00:11.251 ****
2026-02-18 05:09:36.487351 | orchestrator | ok: [testbed-manager]
2026-02-18 05:09:36.487362 | orchestrator |
2026-02-18 05:09:36.487373 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-18 05:09:36.487384 | orchestrator | Wednesday 18 February 2026 05:09:21 +0000 (0:00:01.924) 0:00:13.175 ****
2026-02-18 05:09:36.487395 | orchestrator | ok: [testbed-manager]
2026-02-18 05:09:36.487407 | orchestrator |
2026-02-18 05:09:36.487417 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-18 05:09:36.487428 | orchestrator | Wednesday 18 February 2026 05:09:23 +0000 (0:00:01.905) 0:00:15.082 ****
2026-02-18 05:09:36.487439 | orchestrator | ok: [testbed-manager]
2026-02-18 05:09:36.487450 | orchestrator |
2026-02-18 05:09:36.487463 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-18 05:09:36.487477 | orchestrator | Wednesday 18 February 2026 05:09:25 +0000 (0:00:02.373) 0:00:17.455 ****
2026-02-18 05:09:36.487490 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:09:36.487534 | orchestrator |
2026-02-18 05:09:36.487547 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-18 05:09:36.487560 | orchestrator | Wednesday 18 February 2026 05:09:26 +0000 (0:00:01.202) 0:00:18.658 ****
2026-02-18 05:09:36.487572 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:09:36.487585 | orchestrator |
2026-02-18 05:09:36.487597 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-18 05:09:36.487610 | orchestrator | Wednesday 18 February 2026 05:09:28 +0000 (0:00:01.172) 0:00:19.831 ****
2026-02-18 05:09:36.487622 | orchestrator | ok: [testbed-manager]
2026-02-18 05:09:36.487635 | orchestrator |
2026-02-18 05:09:36.487647 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-18 05:09:36.487660 | orchestrator | Wednesday 18 February 2026 05:09:30 +0000 (0:00:01.986) 0:00:21.818 ****
2026-02-18 05:09:36.487672 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-18 05:09:36.487685 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-18 05:09:36.487699 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-18 05:09:36.487711 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-18 05:09:36.487724 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-18 05:09:36.487737 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-18 05:09:36.487750 | orchestrator |
2026-02-18 05:09:36.487781 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-18 05:09:36.487794 | orchestrator | Wednesday 18 February 2026 05:09:33 +0000 (0:00:03.534) 0:00:25.352 ****
2026-02-18 05:09:36.487808 | orchestrator | ok: [testbed-manager]
2026-02-18 05:09:36.487819 | orchestrator |
2026-02-18 05:09:36.487830 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 05:09:36.487842 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 05:09:36.487852 | orchestrator |
2026-02-18 05:09:36.487863 | orchestrator |
2026-02-18 05:09:36.487874 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 05:09:36.487941 | orchestrator | Wednesday 18 February 2026 05:09:36 +0000 (0:00:02.494) 0:00:27.847 ****
2026-02-18 05:09:36.487954 | orchestrator | ===============================================================================
2026-02-18 05:09:36.487965 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.53s
2026-02-18 05:09:36.487976 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.81s
2026-02-18 05:09:36.487987 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.49s
2026-02-18 05:09:36.487998 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.44s
2026-02-18 05:09:36.488009 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.37s
2026-02-18 05:09:36.488020 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.99s
2026-02-18 05:09:36.488050 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.92s
2026-02-18 05:09:36.488062 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.91s
2026-02-18 05:09:36.488093 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.86s
2026-02-18 05:09:36.488105 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.20s
2026-02-18 05:09:36.488116 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.17s
2026-02-18 05:09:36.805707 | orchestrator | + osism apply kubernetes
2026-02-18 05:09:38.893237 | orchestrator | 2026-02-18 05:09:38 | INFO  | Task 8ffea722-be1a-4a82-9ea5-25e520ad8dee (kubernetes) was prepared for execution.
2026-02-18 05:09:38.893391 | orchestrator | 2026-02-18 05:09:38 | INFO  | It takes a moment until task 8ffea722-be1a-4a82-9ea5-25e520ad8dee (kubernetes) has been started and output is visible here.
2026-02-18 05:10:24.711370 | orchestrator |
2026-02-18 05:10:24.711470 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-18 05:10:24.711483 | orchestrator |
2026-02-18 05:10:24.711492 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-18 05:10:24.711502 | orchestrator | Wednesday 18 February 2026 05:09:46 +0000 (0:00:02.935) 0:00:02.935 ****
2026-02-18 05:10:24.711510 | orchestrator | ok: [testbed-node-3]
2026-02-18 05:10:24.711520 | orchestrator | ok: [testbed-node-4]
2026-02-18 05:10:24.711528 | orchestrator | ok: [testbed-node-5]
2026-02-18 05:10:24.711536 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:10:24.711544 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:10:24.711551 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:10:24.711559 | orchestrator |
2026-02-18 05:10:24.711568 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-18 05:10:24.711576 | orchestrator | Wednesday 18 February 2026 05:09:51 +0000 (0:00:05.067) 0:00:08.002 ****
2026-02-18 05:10:24.711584 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:10:24.711593 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:10:24.711601 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:10:24.711609 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:10:24.711617 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:10:24.711624 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:10:24.711632 | orchestrator |
2026-02-18 05:10:24.711640 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-18 05:10:24.711648 | orchestrator | Wednesday 18 February 2026 05:09:53 +0000 (0:00:02.063) 0:00:10.066 ****
2026-02-18 05:10:24.711656 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:10:24.711664 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:10:24.711671 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:10:24.711679 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:10:24.711687 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:10:24.711695 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:10:24.711703 | orchestrator |
2026-02-18 05:10:24.711711 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-18 05:10:24.711719 | orchestrator | Wednesday 18 February 2026 05:09:55 +0000 (0:00:01.950) 0:00:12.016 ****
2026-02-18 05:10:24.711727 | orchestrator | ok: [testbed-node-4]
2026-02-18 05:10:24.711735 | orchestrator | ok: [testbed-node-3]
2026-02-18 05:10:24.711743 | orchestrator | ok: [testbed-node-5]
2026-02-18 05:10:24.711751 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:10:24.711759 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:10:24.711766 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:10:24.711774 | orchestrator |
2026-02-18 05:10:24.711782 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-18 05:10:24.711790 | orchestrator | Wednesday 18 February 2026 05:09:58 +0000 (0:00:02.635) 0:00:14.652 ****
2026-02-18 05:10:24.711798 | orchestrator | ok: [testbed-node-3]
2026-02-18 05:10:24.711805 | orchestrator | ok: [testbed-node-4]
2026-02-18 05:10:24.711813 | orchestrator | ok: [testbed-node-5]
2026-02-18 05:10:24.711821 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:10:24.711829 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:10:24.711837 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:10:24.711845 | orchestrator |
2026-02-18 05:10:24.711853 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-18 05:10:24.711861 | orchestrator | Wednesday 18 February 2026 05:10:00 +0000 (0:00:02.591) 0:00:17.244 ****
2026-02-18 05:10:24.711869 | orchestrator | ok: [testbed-node-3]
2026-02-18 05:10:24.711876 | orchestrator | ok: [testbed-node-4]
2026-02-18 05:10:24.711884 | orchestrator | ok: [testbed-node-5]
2026-02-18 05:10:24.711892 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:10:24.711900 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:10:24.711926 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:10:24.711935 | orchestrator |
2026-02-18 05:10:24.711944 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-18 05:10:24.711954 | orchestrator | Wednesday 18 February 2026 05:10:03 +0000 (0:00:02.325) 0:00:19.569 ****
2026-02-18 05:10:24.711962 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:10:24.711971 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:10:24.711980 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:10:24.711990 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:10:24.711998 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:10:24.712007 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:10:24.712016 | orchestrator |
2026-02-18 05:10:24.712025 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-18 05:10:24.712034 | orchestrator | Wednesday 18 February 2026 05:10:05 +0000 (0:00:02.084) 0:00:21.654 ****
2026-02-18 05:10:24.712068 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:10:24.712081 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:10:24.712093 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:10:24.712108 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:10:24.712121 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:10:24.712135 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:10:24.712146 | orchestrator |
2026-02-18 05:10:24.712155 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-18 05:10:24.712164 | orchestrator | Wednesday 18 February 2026 05:10:07 +0000 (0:00:01.998) 0:00:23.653 ****
2026-02-18 05:10:24.712173 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-18 05:10:24.712182 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-18 05:10:24.712191 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:10:24.712200 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-18 05:10:24.712219 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-18 05:10:24.712228 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:10:24.712237 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-18 05:10:24.712246 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-18 05:10:24.712255 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:10:24.712264 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-18 05:10:24.712273 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-18 05:10:24.712282 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:10:24.712304 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-18 05:10:24.712313 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-18 05:10:24.712320 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:10:24.712328 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-18 05:10:24.712336 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-18 05:10:24.712344 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:10:24.712352 | orchestrator |
2026-02-18 05:10:24.712360 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-18 05:10:24.712368 | orchestrator | Wednesday 18 February 2026 05:10:09 +0000 (0:00:02.220) 0:00:25.873 ****
2026-02-18 05:10:24.712376 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:10:24.712384 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:10:24.712392 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:10:24.712399 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:10:24.712407 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:10:24.712415 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:10:24.712423 | orchestrator |
2026-02-18 05:10:24.712438 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-18 05:10:24.712447 | orchestrator | Wednesday 18 February 2026 05:10:11 +0000 (0:00:02.141) 0:00:28.014 ****
2026-02-18 05:10:24.712455 | orchestrator | ok: [testbed-node-3]
2026-02-18 05:10:24.712463 | orchestrator | ok: [testbed-node-4]
2026-02-18 05:10:24.712471 | orchestrator | ok: [testbed-node-5]
2026-02-18 05:10:24.712479 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:10:24.712486 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:10:24.712494 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:10:24.712502 | orchestrator |
2026-02-18 05:10:24.712510 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-18 05:10:24.712518 | orchestrator | Wednesday 18 February 2026 05:10:13 +0000 (0:00:02.085) 0:00:30.100 ****
2026-02-18 05:10:24.712526 | orchestrator | ok: [testbed-node-4]
2026-02-18 05:10:24.712534 | orchestrator | ok: [testbed-node-5]
2026-02-18 05:10:24.712542 | orchestrator | ok: [testbed-node-3]
2026-02-18 05:10:24.712550 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:10:24.712562 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:10:24.712570 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:10:24.712578 | orchestrator |
2026-02-18 05:10:24.712586 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-18 05:10:24.712594 | orchestrator | Wednesday 18 February 2026 05:10:16 +0000 (0:00:02.657) 0:00:32.757 ****
2026-02-18 05:10:24.712602 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:10:24.712610 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:10:24.712618 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:10:24.712626 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:10:24.712634 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:10:24.712641 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:10:24.712649 | orchestrator |
2026-02-18 05:10:24.712657 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-18 05:10:24.712665 | orchestrator | Wednesday 18 February 2026 05:10:18 +0000 (0:00:01.922) 0:00:34.680 ****
2026-02-18 05:10:24.712673 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:10:24.712681 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:10:24.712689 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:10:24.712697 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:10:24.712705 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:10:24.712712 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:10:24.712720 | orchestrator |
2026-02-18 05:10:24.712728 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-18 05:10:24.712738 | orchestrator | Wednesday 18 February 2026 05:10:20 +0000 (0:00:02.173) 0:00:36.854 ****
2026-02-18 05:10:24.712746 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:10:24.712753 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:10:24.712761 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:10:24.712769 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:10:24.712777 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:10:24.712785 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:10:24.712793 | orchestrator |
2026-02-18 05:10:24.712805 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-18 05:10:24.712813 | orchestrator | Wednesday 18 February 2026 05:10:22 +0000 (0:00:01.766) 0:00:38.621 ****
2026-02-18 05:10:24.712821 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-18 05:10:24.712829 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-18 05:10:24.712836 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:10:24.712844 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-18 05:10:24.712852 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-18 05:10:24.712860 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:10:24.712868 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-18 05:10:24.712876 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-18 05:10:24.712890 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:10:24.712897 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-18 05:10:24.712905 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-18 05:10:24.712913 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:10:24.712921 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-18 05:10:24.712929 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-18 05:10:24.712937 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:10:24.712949 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-18 05:10:24.712962 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-18 05:10:24.712983 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:10:24.712996 | orchestrator |
2026-02-18 05:10:24.713009 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-18 05:10:24.713021 | orchestrator | Wednesday 18 February 2026 05:10:24 +0000 (0:00:02.023) 0:00:40.644 ****
2026-02-18 05:10:24.713033 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:10:24.713110 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:10:24.713133 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:12:10.251378 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:12:10.251476 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:12:10.251487 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:12:10.251495 | orchestrator |
2026-02-18 05:12:10.251505 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-18 05:12:10.251514 | orchestrator | Wednesday 18 February 2026 05:10:26 +0000 (0:00:01.839) 0:00:42.484 ****
2026-02-18 05:12:10.251522 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:12:10.251529 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:12:10.251537 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:12:10.251544 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:12:10.251551 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:12:10.251558 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:12:10.251565 | orchestrator |
2026-02-18 05:12:10.251573 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-18 05:12:10.251580 | orchestrator |
2026-02-18 05:12:10.251590 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-18 05:12:10.251603 | orchestrator | Wednesday 18 February 2026 05:10:28 +0000 (0:00:02.738) 0:00:45.223 ****
2026-02-18 05:12:10.251616 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:12:10.251630 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:12:10.251641 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:12:10.251653 | orchestrator |
2026-02-18 05:12:10.251665 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-18 05:12:10.251678 | orchestrator | Wednesday 18 February 2026 05:10:30 +0000 (0:00:02.021) 0:00:47.244 ****
2026-02-18 05:12:10.251689 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:12:10.251700 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:12:10.251712 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:12:10.251724 | orchestrator |
2026-02-18 05:12:10.251736 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-18 05:12:10.251748 | orchestrator | Wednesday 18 February 2026 05:10:32 +0000 (0:00:02.166) 0:00:49.411 ****
2026-02-18 05:12:10.251761 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:12:10.251773 | orchestrator | changed: [testbed-node-1]
2026-02-18 05:12:10.251785 | orchestrator | changed: [testbed-node-2]
2026-02-18 05:12:10.251797 | orchestrator |
2026-02-18 05:12:10.251830 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-18 05:12:10.251842 | orchestrator | Wednesday 18 February 2026 05:10:35 +0000 (0:00:02.097) 0:00:51.509 ****
2026-02-18 05:12:10.251849 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:12:10.251857 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:12:10.251864 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:12:10.251871 | orchestrator |
2026-02-18 05:12:10.251898 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-18 05:12:10.251906 | orchestrator | Wednesday 18 February 2026 05:10:36 +0000 (0:00:01.904) 0:00:53.413 ****
2026-02-18 05:12:10.251913 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:12:10.251920 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:12:10.251928 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:12:10.251935 | orchestrator |
2026-02-18 05:12:10.251943 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-18 05:12:10.251952 | orchestrator | Wednesday 18 February 2026 05:10:38 +0000 (0:00:01.385) 0:00:54.798 ****
2026-02-18 05:12:10.251960 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:12:10.251969 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:12:10.251977 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:12:10.251985 | orchestrator |
2026-02-18 05:12:10.251993 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-18 05:12:10.252001 | orchestrator | Wednesday 18 February 2026 05:10:40 +0000 (0:00:01.755) 0:00:56.554 ****
2026-02-18 05:12:10.252010 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:12:10.252018 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:12:10.252026 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:12:10.252034 | orchestrator |
2026-02-18 05:12:10.252042 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-18 05:12:10.252050 | orchestrator | Wednesday 18 February 2026 05:10:42 +0000 (0:00:02.237) 0:00:58.792 ****
2026-02-18 05:12:10.252059 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 05:12:10.252093 | orchestrator |
2026-02-18 05:12:10.252102 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-18 05:12:10.252111 | orchestrator | Wednesday 18 February 2026 05:10:44 +0000 (0:00:01.933) 0:01:00.725 ****
2026-02-18 05:12:10.252119 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:12:10.252127 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:12:10.252135 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:12:10.252143 | orchestrator |
2026-02-18 05:12:10.252151 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-18 05:12:10.252159 | orchestrator | Wednesday 18 February 2026 05:10:46 +0000 (0:00:02.579) 0:01:03.305 ****
2026-02-18 05:12:10.252166 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:12:10.252173 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:12:10.252180 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:12:10.252187 | orchestrator |
2026-02-18 05:12:10.252194 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-18 05:12:10.252201 | orchestrator | Wednesday 18 February 2026 05:10:48 +0000 (0:00:01.789) 0:01:05.094 ****
2026-02-18 05:12:10.252209 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:12:10.252216 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:12:10.252223 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:12:10.252230 | orchestrator |
2026-02-18 05:12:10.252237 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-18 05:12:10.252244 | orchestrator | Wednesday 18 February 2026 05:10:50 +0000 (0:00:01.887) 0:01:06.982 ****
2026-02-18 05:12:10.252251 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:12:10.252258 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:12:10.252265 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:12:10.252273 | orchestrator |
2026-02-18 05:12:10.252280 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-18 05:12:10.252287 | orchestrator | Wednesday 18 February 2026 05:10:53 +0000 (0:00:02.474) 0:01:09.456 ****
2026-02-18 05:12:10.252294 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:12:10.252301 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:12:10.252324 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:12:10.252332 | orchestrator |
2026-02-18 05:12:10.252339 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-18 05:12:10.252346 | orchestrator | Wednesday 18 February 2026 05:10:54 +0000 (0:00:01.463) 0:01:10.919 ****
2026-02-18 05:12:10.252359 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:12:10.252367 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:12:10.252374 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:12:10.252381 | orchestrator |
2026-02-18 05:12:10.252388 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-18 05:12:10.252395 | orchestrator | Wednesday 18 February 2026 05:10:56 +0000 (0:00:01.662) 0:01:12.582 ****
2026-02-18 05:12:10.252402 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:12:10.252410 | orchestrator | changed: [testbed-node-1]
2026-02-18 05:12:10.252417 | orchestrator | changed: [testbed-node-2]
2026-02-18 05:12:10.252424 | orchestrator |
2026-02-18 05:12:10.252431 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-18 05:12:10.252438 | orchestrator | Wednesday 18 February 2026 05:10:58 +0000 (0:00:02.092) 0:01:14.675 ****
2026-02-18 05:12:10.252445 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:12:10.252453 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:12:10.252460 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:12:10.252467 | orchestrator |
2026-02-18 05:12:10.252474 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-18 05:12:10.252481 | orchestrator | Wednesday 18 February 2026 05:11:00 +0000 (0:00:02.149) 0:01:16.824 ****
2026-02-18 05:12:10.252489 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:12:10.252496 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:12:10.252503 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:12:10.252510 | orchestrator |
2026-02-18 05:12:10.252517
orchestrator | 2026-02-18 05:12:10.252339 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-18 05:12:10.252346 | orchestrator | Wednesday 18 February 2026 05:10:54 +0000 (0:00:01.463) 0:01:10.919 **** 2026-02-18 05:12:10.252359 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:12:10.252367 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:12:10.252374 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:12:10.252381 | orchestrator | 2026-02-18 05:12:10.252388 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-18 05:12:10.252395 | orchestrator | Wednesday 18 February 2026 05:10:56 +0000 (0:00:01.662) 0:01:12.582 **** 2026-02-18 05:12:10.252402 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:12:10.252410 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:12:10.252417 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:12:10.252424 | orchestrator | 2026-02-18 05:12:10.252431 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-18 05:12:10.252438 | orchestrator | Wednesday 18 February 2026 05:10:58 +0000 (0:00:02.092) 0:01:14.675 **** 2026-02-18 05:12:10.252445 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:12:10.252453 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:12:10.252460 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:12:10.252467 | orchestrator | 2026-02-18 05:12:10.252474 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-18 05:12:10.252481 | orchestrator | Wednesday 18 February 2026 05:11:00 +0000 (0:00:02.149) 0:01:16.824 **** 2026-02-18 05:12:10.252489 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:12:10.252496 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:12:10.252503 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:12:10.252510 | orchestrator | 2026-02-18 05:12:10.252517 
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-18 05:12:10.252524 | orchestrator | Wednesday 18 February 2026 05:11:01 +0000 (0:00:01.373) 0:01:18.197 **** 2026-02-18 05:12:10.252532 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-18 05:12:10.252541 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-18 05:12:10.252549 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-18 05:12:10.252556 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-18 05:12:10.252563 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-18 05:12:10.252570 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-18 05:12:10.252577 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-18 05:12:10.252584 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-18 05:12:10.252592 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-02-18 05:12:10.252599 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:12:10.252606 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:12:10.252613 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:12:10.252620 | orchestrator | 2026-02-18 05:12:10.252628 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-18 05:12:10.252635 | orchestrator | Wednesday 18 February 2026 05:11:35 +0000 (0:00:34.002) 0:01:52.200 **** 2026-02-18 05:12:10.252642 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:12:10.252649 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:12:10.252656 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:12:10.252664 | orchestrator | 2026-02-18 05:12:10.252675 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-18 05:12:10.252683 | orchestrator | Wednesday 18 February 2026 05:11:37 +0000 (0:00:01.387) 0:01:53.588 **** 2026-02-18 05:12:10.252690 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:12:10.252697 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:12:10.252705 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:12:10.252712 | orchestrator | 2026-02-18 05:12:10.252719 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-18 05:12:10.252726 | orchestrator | Wednesday 18 February 2026 05:11:39 +0000 (0:00:02.154) 0:01:55.742 **** 2026-02-18 05:12:10.252733 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:12:10.252741 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:12:10.252748 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:12:10.252755 | orchestrator | 2026-02-18 05:12:10.252762 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-18 05:12:10.252769 | orchestrator | Wednesday 18 February 2026 05:11:41 +0000 (0:00:02.214) 0:01:57.956 **** 2026-02-18 05:12:10.252776 | orchestrator 
| changed: [testbed-node-2] 2026-02-18 05:12:10.252784 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:12:10.252837 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:12:10.252845 | orchestrator | 2026-02-18 05:12:10.252852 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-18 05:12:10.252859 | orchestrator | Wednesday 18 February 2026 05:12:08 +0000 (0:00:27.035) 0:02:24.992 **** 2026-02-18 05:12:10.252867 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:12:10.252874 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:12:10.252881 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:12:10.252888 | orchestrator | 2026-02-18 05:12:10.252895 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-18 05:12:10.252908 | orchestrator | Wednesday 18 February 2026 05:12:10 +0000 (0:00:01.686) 0:02:26.678 **** 2026-02-18 05:13:00.606723 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:13:00.606860 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:13:00.606877 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:13:00.606889 | orchestrator | 2026-02-18 05:13:00.606902 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-18 05:13:00.606915 | orchestrator | Wednesday 18 February 2026 05:12:11 +0000 (0:00:01.710) 0:02:28.389 **** 2026-02-18 05:13:00.606926 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:13:00.606939 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:13:00.606950 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:13:00.606961 | orchestrator | 2026-02-18 05:13:00.606972 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-18 05:13:00.606984 | orchestrator | Wednesday 18 February 2026 05:12:13 +0000 (0:00:01.914) 0:02:30.303 **** 2026-02-18 05:13:00.606995 | orchestrator | ok: [testbed-node-0] 2026-02-18 
05:13:00.607006 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:13:00.607017 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:13:00.607028 | orchestrator | 2026-02-18 05:13:00.607039 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-18 05:13:00.607051 | orchestrator | Wednesday 18 February 2026 05:12:15 +0000 (0:00:01.739) 0:02:32.043 **** 2026-02-18 05:13:00.607062 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:13:00.607137 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:13:00.607150 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:13:00.607161 | orchestrator | 2026-02-18 05:13:00.607172 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-18 05:13:00.607184 | orchestrator | Wednesday 18 February 2026 05:12:16 +0000 (0:00:01.312) 0:02:33.356 **** 2026-02-18 05:13:00.607195 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:13:00.607206 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:13:00.607217 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:13:00.607228 | orchestrator | 2026-02-18 05:13:00.607240 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-18 05:13:00.607251 | orchestrator | Wednesday 18 February 2026 05:12:18 +0000 (0:00:01.638) 0:02:34.995 **** 2026-02-18 05:13:00.607288 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:13:00.607307 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:13:00.607320 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:13:00.607333 | orchestrator | 2026-02-18 05:13:00.607345 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-18 05:13:00.607358 | orchestrator | Wednesday 18 February 2026 05:12:20 +0000 (0:00:02.034) 0:02:37.030 **** 2026-02-18 05:13:00.607371 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:13:00.607383 | orchestrator | changed: 
[testbed-node-1] 2026-02-18 05:13:00.607396 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:13:00.607409 | orchestrator | 2026-02-18 05:13:00.607421 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-18 05:13:00.607434 | orchestrator | Wednesday 18 February 2026 05:12:22 +0000 (0:00:01.849) 0:02:38.880 **** 2026-02-18 05:13:00.607447 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:13:00.607460 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:13:00.607472 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:13:00.607485 | orchestrator | 2026-02-18 05:13:00.607498 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-18 05:13:00.607511 | orchestrator | Wednesday 18 February 2026 05:12:24 +0000 (0:00:01.864) 0:02:40.744 **** 2026-02-18 05:13:00.607523 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:13:00.607535 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:13:00.607548 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:13:00.607561 | orchestrator | 2026-02-18 05:13:00.607573 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-18 05:13:00.607586 | orchestrator | Wednesday 18 February 2026 05:12:25 +0000 (0:00:01.365) 0:02:42.110 **** 2026-02-18 05:13:00.607599 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:13:00.607611 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:13:00.607624 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:13:00.607636 | orchestrator | 2026-02-18 05:13:00.607650 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-18 05:13:00.607663 | orchestrator | Wednesday 18 February 2026 05:12:26 +0000 (0:00:01.310) 0:02:43.420 **** 2026-02-18 05:13:00.607676 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:13:00.607687 | orchestrator | ok: [testbed-node-0] 
2026-02-18 05:13:00.607698 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:13:00.607709 | orchestrator | 2026-02-18 05:13:00.607719 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-18 05:13:00.607730 | orchestrator | Wednesday 18 February 2026 05:12:28 +0000 (0:00:01.634) 0:02:45.055 **** 2026-02-18 05:13:00.607741 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:13:00.607752 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:13:00.607763 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:13:00.607773 | orchestrator | 2026-02-18 05:13:00.607786 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-18 05:13:00.607798 | orchestrator | Wednesday 18 February 2026 05:12:30 +0000 (0:00:02.025) 0:02:47.080 **** 2026-02-18 05:13:00.607809 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-18 05:13:00.607821 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-18 05:13:00.607831 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-18 05:13:00.607842 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-18 05:13:00.607853 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-18 05:13:00.607864 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-18 05:13:00.607876 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-18 05:13:00.607895 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-18 05:13:00.607923 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-18 05:13:00.607935 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-18 05:13:00.607946 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-18 05:13:00.607957 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-18 05:13:00.607967 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-18 05:13:00.607978 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-18 05:13:00.607988 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-18 05:13:00.607999 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-18 05:13:00.608010 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-18 05:13:00.608020 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-18 05:13:00.608031 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-18 05:13:00.608041 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-18 05:13:00.608052 | orchestrator | 2026-02-18 05:13:00.608063 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-18 05:13:00.608091 | orchestrator | 2026-02-18 05:13:00.608103 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-18 05:13:00.608114 | orchestrator | Wednesday 18 February 2026 05:12:35 +0000 (0:00:04.392) 0:02:51.473 **** 
2026-02-18 05:13:00.608125 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:13:00.608136 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:13:00.608146 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:13:00.608157 | orchestrator | 2026-02-18 05:13:00.608168 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-18 05:13:00.608178 | orchestrator | Wednesday 18 February 2026 05:12:36 +0000 (0:00:01.439) 0:02:52.912 **** 2026-02-18 05:13:00.608189 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:13:00.608200 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:13:00.608210 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:13:00.608221 | orchestrator | 2026-02-18 05:13:00.608232 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-18 05:13:00.608243 | orchestrator | Wednesday 18 February 2026 05:12:38 +0000 (0:00:01.690) 0:02:54.603 **** 2026-02-18 05:13:00.608253 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:13:00.608264 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:13:00.608275 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:13:00.608286 | orchestrator | 2026-02-18 05:13:00.608297 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-18 05:13:00.608308 | orchestrator | Wednesday 18 February 2026 05:12:39 +0000 (0:00:01.646) 0:02:56.249 **** 2026-02-18 05:13:00.608318 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 05:13:00.608330 | orchestrator | 2026-02-18 05:13:00.608340 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-18 05:13:00.608351 | orchestrator | Wednesday 18 February 2026 05:12:41 +0000 (0:00:01.730) 0:02:57.980 **** 2026-02-18 05:13:00.608362 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:13:00.608373 | orchestrator | 
skipping: [testbed-node-4] 2026-02-18 05:13:00.608384 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:13:00.608394 | orchestrator | 2026-02-18 05:13:00.608405 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-18 05:13:00.608423 | orchestrator | Wednesday 18 February 2026 05:12:43 +0000 (0:00:01.594) 0:02:59.574 **** 2026-02-18 05:13:00.608434 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:13:00.608445 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:13:00.608456 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:13:00.608466 | orchestrator | 2026-02-18 05:13:00.608477 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-18 05:13:00.608488 | orchestrator | Wednesday 18 February 2026 05:12:44 +0000 (0:00:01.387) 0:03:00.961 **** 2026-02-18 05:13:00.608498 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:13:00.608509 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:13:00.608520 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:13:00.608531 | orchestrator | 2026-02-18 05:13:00.608541 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-18 05:13:00.608552 | orchestrator | Wednesday 18 February 2026 05:12:45 +0000 (0:00:01.409) 0:03:02.371 **** 2026-02-18 05:13:00.608563 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:13:00.608574 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:13:00.608584 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:13:00.608595 | orchestrator | 2026-02-18 05:13:00.608606 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-18 05:13:00.608617 | orchestrator | Wednesday 18 February 2026 05:12:47 +0000 (0:00:01.785) 0:03:04.156 **** 2026-02-18 05:13:00.608627 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:13:00.608638 | orchestrator | ok: [testbed-node-4] 
2026-02-18 05:13:00.608654 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:13:00.608665 | orchestrator | 2026-02-18 05:13:00.608676 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-18 05:13:00.608687 | orchestrator | Wednesday 18 February 2026 05:12:50 +0000 (0:00:02.443) 0:03:06.600 **** 2026-02-18 05:13:00.608697 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:13:00.608708 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:13:00.608719 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:13:00.608730 | orchestrator | 2026-02-18 05:13:00.608741 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-18 05:13:00.608751 | orchestrator | Wednesday 18 February 2026 05:12:52 +0000 (0:00:02.277) 0:03:08.878 **** 2026-02-18 05:13:00.608769 | orchestrator | changed: [testbed-node-3] 2026-02-18 05:14:08.658505 | orchestrator | changed: [testbed-node-4] 2026-02-18 05:14:08.658643 | orchestrator | changed: [testbed-node-5] 2026-02-18 05:14:08.658668 | orchestrator | 2026-02-18 05:14:08.658687 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-18 05:14:08.658706 | orchestrator | 2026-02-18 05:14:08.658742 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-18 05:14:08.658761 | orchestrator | Wednesday 18 February 2026 05:13:00 +0000 (0:00:08.160) 0:03:17.038 **** 2026-02-18 05:14:08.658778 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.658792 | orchestrator | 2026-02-18 05:14:08.658802 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-18 05:14:08.658813 | orchestrator | Wednesday 18 February 2026 05:13:02 +0000 (0:00:02.113) 0:03:19.152 **** 2026-02-18 05:14:08.658823 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.658833 | orchestrator | 2026-02-18 05:14:08.658842 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-18 05:14:08.658852 | orchestrator | Wednesday 18 February 2026 05:13:04 +0000 (0:00:01.458) 0:03:20.611 **** 2026-02-18 05:14:08.658862 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-18 05:14:08.658872 | orchestrator | 2026-02-18 05:14:08.658882 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-18 05:14:08.658892 | orchestrator | Wednesday 18 February 2026 05:13:05 +0000 (0:00:01.640) 0:03:22.251 **** 2026-02-18 05:14:08.658902 | orchestrator | changed: [testbed-manager] 2026-02-18 05:14:08.658912 | orchestrator | 2026-02-18 05:14:08.658921 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-18 05:14:08.658956 | orchestrator | Wednesday 18 February 2026 05:13:07 +0000 (0:00:01.918) 0:03:24.170 **** 2026-02-18 05:14:08.658966 | orchestrator | changed: [testbed-manager] 2026-02-18 05:14:08.658976 | orchestrator | 2026-02-18 05:14:08.658986 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-18 05:14:08.658995 | orchestrator | Wednesday 18 February 2026 05:13:09 +0000 (0:00:01.602) 0:03:25.773 **** 2026-02-18 05:14:08.659019 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-18 05:14:08.659032 | orchestrator | 2026-02-18 05:14:08.659043 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-18 05:14:08.659055 | orchestrator | Wednesday 18 February 2026 05:13:12 +0000 (0:00:03.006) 0:03:28.780 **** 2026-02-18 05:14:08.659070 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-18 05:14:08.659113 | orchestrator | 2026-02-18 05:14:08.659136 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-18 05:14:08.659152 | orchestrator | Wednesday 18 
February 2026 05:13:14 +0000 (0:00:01.870) 0:03:30.650 **** 2026-02-18 05:14:08.659167 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.659183 | orchestrator | 2026-02-18 05:14:08.659197 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-18 05:14:08.659214 | orchestrator | Wednesday 18 February 2026 05:13:15 +0000 (0:00:01.472) 0:03:32.122 **** 2026-02-18 05:14:08.659229 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.659246 | orchestrator | 2026-02-18 05:14:08.659263 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-18 05:14:08.659279 | orchestrator | 2026-02-18 05:14:08.659299 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-18 05:14:08.659315 | orchestrator | Wednesday 18 February 2026 05:13:17 +0000 (0:00:01.673) 0:03:33.796 **** 2026-02-18 05:14:08.659330 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.659340 | orchestrator | 2026-02-18 05:14:08.659350 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-18 05:14:08.659359 | orchestrator | Wednesday 18 February 2026 05:13:18 +0000 (0:00:01.124) 0:03:34.920 **** 2026-02-18 05:14:08.659369 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-18 05:14:08.659379 | orchestrator | 2026-02-18 05:14:08.659392 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-18 05:14:08.659408 | orchestrator | Wednesday 18 February 2026 05:13:19 +0000 (0:00:01.453) 0:03:36.373 **** 2026-02-18 05:14:08.659424 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.659439 | orchestrator | 2026-02-18 05:14:08.659453 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-02-18 05:14:08.659469 | orchestrator | Wednesday 18 
February 2026 05:13:21 +0000 (0:00:01.877) 0:03:38.251 **** 2026-02-18 05:14:08.659486 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.659503 | orchestrator | 2026-02-18 05:14:08.659520 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-18 05:14:08.659535 | orchestrator | Wednesday 18 February 2026 05:13:24 +0000 (0:00:02.826) 0:03:41.077 **** 2026-02-18 05:14:08.659549 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.659558 | orchestrator | 2026-02-18 05:14:08.659568 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-18 05:14:08.659577 | orchestrator | Wednesday 18 February 2026 05:13:26 +0000 (0:00:01.454) 0:03:42.531 **** 2026-02-18 05:14:08.659587 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.659597 | orchestrator | 2026-02-18 05:14:08.659606 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-18 05:14:08.659616 | orchestrator | Wednesday 18 February 2026 05:13:27 +0000 (0:00:01.476) 0:03:44.008 **** 2026-02-18 05:14:08.659626 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.659635 | orchestrator | 2026-02-18 05:14:08.659649 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-18 05:14:08.659665 | orchestrator | Wednesday 18 February 2026 05:13:29 +0000 (0:00:01.673) 0:03:45.682 **** 2026-02-18 05:14:08.659694 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.659709 | orchestrator | 2026-02-18 05:14:08.659726 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-18 05:14:08.659741 | orchestrator | Wednesday 18 February 2026 05:13:31 +0000 (0:00:02.606) 0:03:48.288 **** 2026-02-18 05:14:08.659758 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:08.659774 | orchestrator | 2026-02-18 05:14:08.659790 | orchestrator | PLAY [Run post actions on 
master nodes] **************************************** 2026-02-18 05:14:08.659807 | orchestrator | 2026-02-18 05:14:08.659817 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-18 05:14:08.659846 | orchestrator | Wednesday 18 February 2026 05:13:33 +0000 (0:00:01.685) 0:03:49.974 **** 2026-02-18 05:14:08.659856 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:14:08.659866 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:14:08.659876 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:14:08.659886 | orchestrator | 2026-02-18 05:14:08.659895 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-18 05:14:08.659905 | orchestrator | Wednesday 18 February 2026 05:13:34 +0000 (0:00:01.390) 0:03:51.364 **** 2026-02-18 05:14:08.659915 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:14:08.659925 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:14:08.659936 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:14:08.659951 | orchestrator | 2026-02-18 05:14:08.659967 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-18 05:14:08.659982 | orchestrator | Wednesday 18 February 2026 05:13:36 +0000 (0:00:01.682) 0:03:53.047 **** 2026-02-18 05:14:08.659998 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:14:08.660014 | orchestrator | 2026-02-18 05:14:08.660024 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-18 05:14:08.660034 | orchestrator | Wednesday 18 February 2026 05:13:38 +0000 (0:00:01.733) 0:03:54.780 **** 2026-02-18 05:14:08.660043 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-18 05:14:08.660052 | orchestrator | 2026-02-18 05:14:08.660062 | orchestrator | TASK [k3s_server_post : Wait for connectivity to 
kube VIP] ********************* 2026-02-18 05:14:08.660071 | orchestrator | Wednesday 18 February 2026 05:13:40 +0000 (0:00:01.916) 0:03:56.697 **** 2026-02-18 05:14:08.660081 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 05:14:08.660121 | orchestrator | 2026-02-18 05:14:08.660131 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-18 05:14:08.660141 | orchestrator | Wednesday 18 February 2026 05:13:42 +0000 (0:00:01.895) 0:03:58.593 **** 2026-02-18 05:14:08.660150 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:14:08.660160 | orchestrator | 2026-02-18 05:14:08.660170 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-18 05:14:08.660180 | orchestrator | Wednesday 18 February 2026 05:13:43 +0000 (0:00:01.126) 0:03:59.719 **** 2026-02-18 05:14:08.660189 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 05:14:08.660199 | orchestrator | 2026-02-18 05:14:08.660208 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-18 05:14:08.660218 | orchestrator | Wednesday 18 February 2026 05:13:45 +0000 (0:00:02.051) 0:04:01.770 **** 2026-02-18 05:14:08.660228 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 05:14:08.660238 | orchestrator | 2026-02-18 05:14:08.660247 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-18 05:14:08.660257 | orchestrator | Wednesday 18 February 2026 05:13:47 +0000 (0:00:02.298) 0:04:04.069 **** 2026-02-18 05:14:08.660267 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 05:14:08.660276 | orchestrator | 2026-02-18 05:14:08.660286 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-18 05:14:08.660296 | orchestrator | Wednesday 18 February 2026 05:13:48 +0000 (0:00:01.138) 0:04:05.207 **** 2026-02-18 05:14:08.660305 | 
orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 05:14:08.660325 | orchestrator | 2026-02-18 05:14:08.660334 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-18 05:14:08.660344 | orchestrator | Wednesday 18 February 2026 05:13:49 +0000 (0:00:01.184) 0:04:06.392 **** 2026-02-18 05:14:08.660354 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-02-18 05:14:08.660363 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-02-18 05:14:08.660374 | orchestrator | } 2026-02-18 05:14:08.660384 | orchestrator | 2026-02-18 05:14:08.660394 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-18 05:14:08.660403 | orchestrator | Wednesday 18 February 2026 05:13:51 +0000 (0:00:01.214) 0:04:07.606 **** 2026-02-18 05:14:08.660417 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:14:08.660439 | orchestrator | 2026-02-18 05:14:08.660460 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-18 05:14:08.660476 | orchestrator | Wednesday 18 February 2026 05:13:52 +0000 (0:00:01.120) 0:04:08.726 **** 2026-02-18 05:14:08.660491 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-18 05:14:08.660506 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-18 05:14:08.660523 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-18 05:14:08.660539 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-18 05:14:08.660556 | orchestrator | 2026-02-18 05:14:08.660573 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-18 05:14:08.660588 | orchestrator | Wednesday 18 February 2026 05:13:57 +0000 (0:00:05.566) 0:04:14.293 **** 2026-02-18 
05:14:08.660604 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-18 05:14:08.660620 | orchestrator | 2026-02-18 05:14:08.660634 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-18 05:14:08.660644 | orchestrator | Wednesday 18 February 2026 05:14:00 +0000 (0:00:02.599) 0:04:16.893 **** 2026-02-18 05:14:08.660653 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-18 05:14:08.660663 | orchestrator | 2026-02-18 05:14:08.660680 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-18 05:14:08.660696 | orchestrator | Wednesday 18 February 2026 05:14:03 +0000 (0:00:02.829) 0:04:19.722 **** 2026-02-18 05:14:08.660712 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-18 05:14:08.660728 | orchestrator | 2026-02-18 05:14:08.660743 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-18 05:14:08.660761 | orchestrator | Wednesday 18 February 2026 05:14:07 +0000 (0:00:04.210) 0:04:23.933 **** 2026-02-18 05:14:08.660779 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:14:08.660795 | orchestrator | 2026-02-18 05:14:08.660839 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-18 05:14:39.911196 | orchestrator | Wednesday 18 February 2026 05:14:08 +0000 (0:00:01.148) 0:04:25.081 **** 2026-02-18 05:14:39.911316 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-18 05:14:39.911333 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-18 05:14:39.911344 | orchestrator | 2026-02-18 05:14:39.911357 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-18 05:14:39.911368 | orchestrator | Wednesday 18 February 2026 05:14:11 +0000 (0:00:02.986) 0:04:28.068 
**** 2026-02-18 05:14:39.911379 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:14:39.911392 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:14:39.911403 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:14:39.911414 | orchestrator | 2026-02-18 05:14:39.911425 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-18 05:14:39.911436 | orchestrator | Wednesday 18 February 2026 05:14:13 +0000 (0:00:01.378) 0:04:29.447 **** 2026-02-18 05:14:39.911447 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:14:39.911459 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:14:39.911493 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:14:39.911505 | orchestrator | 2026-02-18 05:14:39.911516 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-18 05:14:39.911527 | orchestrator | 2026-02-18 05:14:39.911538 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-18 05:14:39.911549 | orchestrator | Wednesday 18 February 2026 05:14:15 +0000 (0:00:02.038) 0:04:31.485 **** 2026-02-18 05:14:39.911560 | orchestrator | ok: [testbed-manager] 2026-02-18 05:14:39.911571 | orchestrator | 2026-02-18 05:14:39.911583 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-02-18 05:14:39.911594 | orchestrator | Wednesday 18 February 2026 05:14:16 +0000 (0:00:01.138) 0:04:32.623 **** 2026-02-18 05:14:39.911621 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-18 05:14:39.911633 | orchestrator | 2026-02-18 05:14:39.911644 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-18 05:14:39.911655 | orchestrator | Wednesday 18 February 2026 05:14:17 +0000 (0:00:01.598) 0:04:34.222 **** 2026-02-18 05:14:39.911666 | orchestrator | ok: [testbed-manager] 2026-02-18 
05:14:39.911678 | orchestrator | 2026-02-18 05:14:39.911691 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-18 05:14:39.911703 | orchestrator | 2026-02-18 05:14:39.911716 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-18 05:14:39.911728 | orchestrator | Wednesday 18 February 2026 05:14:23 +0000 (0:00:05.774) 0:04:39.996 **** 2026-02-18 05:14:39.911740 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:14:39.911753 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:14:39.911765 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:14:39.911777 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:14:39.911790 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:14:39.911802 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:14:39.911814 | orchestrator | 2026-02-18 05:14:39.911827 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-18 05:14:39.911840 | orchestrator | Wednesday 18 February 2026 05:14:25 +0000 (0:00:01.957) 0:04:41.954 **** 2026-02-18 05:14:39.911852 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-18 05:14:39.911865 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-18 05:14:39.911877 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-18 05:14:39.911890 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-18 05:14:39.911902 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-18 05:14:39.911915 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-18 05:14:39.911927 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.kubernetes.io/worker=worker) 2026-02-18 05:14:39.911939 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-18 05:14:39.911952 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-18 05:14:39.911964 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-18 05:14:39.911976 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-18 05:14:39.911989 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-18 05:14:39.912001 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-18 05:14:39.912013 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-18 05:14:39.912025 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-18 05:14:39.912037 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-18 05:14:39.912056 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-18 05:14:39.912067 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-18 05:14:39.912078 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-18 05:14:39.912088 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-18 05:14:39.912121 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-18 05:14:39.912149 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-18 05:14:39.912161 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/rook-mgr=true) 2026-02-18 05:14:39.912172 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-18 05:14:39.912182 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-18 05:14:39.912193 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-18 05:14:39.912203 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-18 05:14:39.912214 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-18 05:14:39.912224 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-18 05:14:39.912235 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-18 05:14:39.912246 | orchestrator | 2026-02-18 05:14:39.912256 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-18 05:14:39.912267 | orchestrator | Wednesday 18 February 2026 05:14:35 +0000 (0:00:09.956) 0:04:51.910 **** 2026-02-18 05:14:39.912278 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:14:39.912289 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:14:39.912299 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:14:39.912310 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:14:39.912322 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:14:39.912333 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:14:39.912343 | orchestrator | 2026-02-18 05:14:39.912354 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-18 05:14:39.912370 | orchestrator | Wednesday 18 February 2026 05:14:37 +0000 (0:00:01.854) 0:04:53.765 **** 2026-02-18 05:14:39.912381 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:14:39.912392 
| orchestrator | skipping: [testbed-node-4] 2026-02-18 05:14:39.912402 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:14:39.912413 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:14:39.912424 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:14:39.912434 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:14:39.912445 | orchestrator | 2026-02-18 05:14:39.912456 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:14:39.912467 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 05:14:39.912479 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-18 05:14:39.912491 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-18 05:14:39.912501 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-18 05:14:39.912512 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-18 05:14:39.912530 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-18 05:14:39.912541 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-18 05:14:39.912551 | orchestrator | 2026-02-18 05:14:39.912562 | orchestrator | 2026-02-18 05:14:39.912573 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 05:14:39.912583 | orchestrator | Wednesday 18 February 2026 05:14:39 +0000 (0:00:02.549) 0:04:56.315 **** 2026-02-18 05:14:39.912594 | orchestrator | =============================================================================== 2026-02-18 05:14:39.912605 | orchestrator | k3s_server : Verify that all nodes actually joined (check 
k3s-init.service if this fails) -- 34.00s 2026-02-18 05:14:39.912616 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.04s 2026-02-18 05:14:39.912626 | orchestrator | Manage labels ----------------------------------------------------------- 9.96s 2026-02-18 05:14:39.912637 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.16s 2026-02-18 05:14:39.912648 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.77s 2026-02-18 05:14:39.912658 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.57s 2026-02-18 05:14:39.912669 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 5.07s 2026-02-18 05:14:39.912680 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.39s 2026-02-18 05:14:39.912690 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.21s 2026-02-18 05:14:39.912701 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.01s 2026-02-18 05:14:39.912712 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.99s 2026-02-18 05:14:39.912722 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.83s 2026-02-18 05:14:39.912740 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.83s 2026-02-18 05:14:40.389036 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.74s 2026-02-18 05:14:40.389157 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.66s 2026-02-18 05:14:40.389169 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.64s 2026-02-18 
05:14:40.389177 | orchestrator | kubectl : Install required packages ------------------------------------- 2.61s 2026-02-18 05:14:40.389185 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 2.60s 2026-02-18 05:14:40.389193 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.59s 2026-02-18 05:14:40.389201 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.58s 2026-02-18 05:14:40.740295 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-18 05:14:40.740389 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-02-18 05:14:40.746198 | orchestrator | + set -e 2026-02-18 05:14:40.746241 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-18 05:14:40.746254 | orchestrator | ++ export INTERACTIVE=false 2026-02-18 05:14:40.746266 | orchestrator | ++ INTERACTIVE=false 2026-02-18 05:14:40.746278 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-18 05:14:40.746289 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-18 05:14:40.746300 | orchestrator | + osism apply openstackclient 2026-02-18 05:14:52.814661 | orchestrator | 2026-02-18 05:14:52 | INFO  | Task a5582a59-b80e-4d0d-ad4c-bc9d624570da (openstackclient) was prepared for execution. 2026-02-18 05:14:52.814778 | orchestrator | 2026-02-18 05:14:52 | INFO  | It takes a moment until task a5582a59-b80e-4d0d-ad4c-bc9d624570da (openstackclient) has been started and output is visible here. 
2026-02-18 05:15:20.096727 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-18 05:15:20.096890 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-18 05:15:20.096940 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-18 05:15:20.096952 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-18 05:15:20.096975 | orchestrator | 2026-02-18 05:15:20.096987 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-18 05:15:20.096998 | orchestrator | 2026-02-18 05:15:20.097009 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-18 05:15:20.097022 | orchestrator | Wednesday 18 February 2026 05:14:59 +0000 (0:00:01.766) 0:00:01.766 **** 2026-02-18 05:15:20.097043 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-18 05:15:20.097062 | orchestrator | 2026-02-18 05:15:20.097081 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-18 05:15:20.097126 | orchestrator | Wednesday 18 February 2026 05:15:00 +0000 (0:00:01.032) 0:00:02.799 **** 2026-02-18 05:15:20.097146 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-18 05:15:20.097166 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-18 05:15:20.097184 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-18 05:15:20.097203 | orchestrator | 2026-02-18 05:15:20.097215 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-18 05:15:20.097225 | orchestrator | Wednesday 18 February 2026 05:15:01 +0000 (0:00:01.386) 0:00:04.185 **** 2026-02-18 05:15:20.097236 | 
orchestrator | changed: [testbed-manager] 2026-02-18 05:15:20.097247 | orchestrator | 2026-02-18 05:15:20.097258 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-18 05:15:20.097269 | orchestrator | Wednesday 18 February 2026 05:15:03 +0000 (0:00:01.299) 0:00:05.484 **** 2026-02-18 05:15:20.097280 | orchestrator | ok: [testbed-manager] 2026-02-18 05:15:20.097291 | orchestrator | 2026-02-18 05:15:20.097302 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-18 05:15:20.097313 | orchestrator | Wednesday 18 February 2026 05:15:04 +0000 (0:00:01.111) 0:00:06.595 **** 2026-02-18 05:15:20.097324 | orchestrator | ok: [testbed-manager] 2026-02-18 05:15:20.097335 | orchestrator | 2026-02-18 05:15:20.097346 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-18 05:15:20.097357 | orchestrator | Wednesday 18 February 2026 05:15:05 +0000 (0:00:01.025) 0:00:07.621 **** 2026-02-18 05:15:20.097367 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-18 05:15:20.097378 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-18 05:15:20.097399 | orchestrator | ok: [testbed-manager] 2026-02-18 05:15:20.097410 | orchestrator | 2026-02-18 05:15:20.097421 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-18 05:15:20.097440 | orchestrator | Wednesday 18 February 2026 05:15:05 +0000 (0:00:00.706) 0:00:08.327 **** 2026-02-18 05:15:20.097458 | orchestrator | changed: [testbed-manager] 2026-02-18 05:15:20.097475 | orchestrator | 2026-02-18 05:15:20.097494 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-18 05:15:20.097513 | orchestrator | Wednesday 18 February 2026 05:15:16 +0000 (0:00:10.617) 0:00:18.944 **** 2026-02-18 05:15:20.097533 
| orchestrator | changed: [testbed-manager] 2026-02-18 05:15:20.097551 | orchestrator | 2026-02-18 05:15:20.097567 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-18 05:15:20.097589 | orchestrator | Wednesday 18 February 2026 05:15:17 +0000 (0:00:01.344) 0:00:20.289 **** 2026-02-18 05:15:20.097600 | orchestrator | changed: [testbed-manager] 2026-02-18 05:15:20.097611 | orchestrator | 2026-02-18 05:15:20.097621 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-18 05:15:20.097632 | orchestrator | Wednesday 18 February 2026 05:15:18 +0000 (0:00:00.615) 0:00:20.904 **** 2026-02-18 05:15:20.097643 | orchestrator | ok: [testbed-manager] 2026-02-18 05:15:20.097654 | orchestrator | 2026-02-18 05:15:20.097664 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:15:20.097676 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-18 05:15:20.097687 | orchestrator | 2026-02-18 05:15:20.097698 | orchestrator | 2026-02-18 05:15:20.097709 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 05:15:20.097720 | orchestrator | Wednesday 18 February 2026 05:15:19 +0000 (0:00:01.217) 0:00:22.122 **** 2026-02-18 05:15:20.097731 | orchestrator | =============================================================================== 2026-02-18 05:15:20.097742 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.62s 2026-02-18 05:15:20.097753 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.39s 2026-02-18 05:15:20.097764 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.34s 2026-02-18 05:15:20.097775 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file 
----------- 1.30s 2026-02-18 05:15:20.097785 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.22s 2026-02-18 05:15:20.097804 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.11s 2026-02-18 05:15:20.097846 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.03s 2026-02-18 05:15:20.097875 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.03s 2026-02-18 05:15:20.097896 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.71s 2026-02-18 05:15:20.097908 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.62s 2026-02-18 05:15:20.439047 | orchestrator | + osism apply -a upgrade common 2026-02-18 05:15:22.479028 | orchestrator | 2026-02-18 05:15:22 | INFO  | Task c60e2c97-a408-42ca-a8c1-5111f1e29d70 (common) was prepared for execution. 2026-02-18 05:15:22.479182 | orchestrator | 2026-02-18 05:15:22 | INFO  | It takes a moment until task c60e2c97-a408-42ca-a8c1-5111f1e29d70 (common) has been started and output is visible here. 
2026-02-18 05:15:42.652388 | orchestrator | 2026-02-18 05:15:42.652504 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-18 05:15:42.652522 | orchestrator | 2026-02-18 05:15:42.652534 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-18 05:15:42.652545 | orchestrator | Wednesday 18 February 2026 05:15:29 +0000 (0:00:02.275) 0:00:02.275 **** 2026-02-18 05:15:42.652557 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 05:15:42.652570 | orchestrator | 2026-02-18 05:15:42.652582 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-18 05:15:42.652594 | orchestrator | Wednesday 18 February 2026 05:15:33 +0000 (0:00:04.066) 0:00:06.341 **** 2026-02-18 05:15:42.652605 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:15:42.652617 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:15:42.652628 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:15:42.652639 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:15:42.652651 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:15:42.652687 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:15:42.652699 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:15:42.652710 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:15:42.652720 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:15:42.652731 | 
orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:15:42.652742 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:15:42.652753 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:15:42.652764 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:15:42.652775 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:15:42.652786 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:15:42.652797 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:15:42.652808 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:15:42.652819 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:15:42.652830 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:15:42.652840 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:15:42.652851 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:15:42.652862 | orchestrator | 2026-02-18 05:15:42.652873 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-18 05:15:42.652884 | orchestrator | Wednesday 18 February 2026 05:15:37 +0000 (0:00:03.917) 0:00:10.258 **** 2026-02-18 05:15:42.652895 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 05:15:42.652908 | orchestrator | 2026-02-18 
05:15:42.652921 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-18 05:15:42.652933 | orchestrator | Wednesday 18 February 2026 05:15:40 +0000 (0:00:02.958) 0:00:13.217 **** 2026-02-18 05:15:42.652950 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:15:42.652983 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:15:42.653024 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:15:42.653047 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:15:42.653060 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:15:42.653073 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:15:42.653341 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:42.653362 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:15:42.653374 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:42.653404 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:45.633865 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:45.633966 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:45.633991 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-18 05:15:45.634004 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:45.634066 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:45.634079 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:45.634091 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:45.634215 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:45.634233 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:45.634250 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:45.634261 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:15:45.634273 | orchestrator | 2026-02-18 05:15:45.634286 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-18 05:15:45.634297 | orchestrator | Wednesday 18 February 2026 05:15:44 +0000 (0:00:04.825) 0:00:18.042 **** 2026-02-18 05:15:45.634311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:45.634324 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:45.634335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:45.634347 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:45.634377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:47.990935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:47.991008 | orchestrator | skipping: [testbed-node-0] 
2026-02-18 05:15:47.991020 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:47.991031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:47.991071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:47.991082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:47.991090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:47.991137 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:15:47.991150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:47.991171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:47.991179 | 
orchestrator | skipping: [testbed-node-2] 2026-02-18 05:15:47.991187 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:15:47.991200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:47.991209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:47.991217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-18 05:15:47.991226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:47.991234 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:15:47.991242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:47.991256 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:15:47.991264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:47.991278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:49.401879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:49.401962 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:15:49.401978 | orchestrator | 2026-02-18 05:15:49.401991 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-18 05:15:49.402077 | orchestrator | Wednesday 18 February 2026 05:15:47 +0000 (0:00:03.115) 0:00:21.157 **** 2026-02-18 05:15:49.402161 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:49.402183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:49.402202 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:49.402245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:49.402266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:49.402287 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:49.402336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:49.402350 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:15:49.402362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:49.402373 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:49.402385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:49.402406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:49.402426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:15:49.402455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:15:49.402486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:03.420211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:03.420324 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:16:03.420355 | orchestrator | skipping: 
[testbed-node-1] 2026-02-18 05:16:03.420366 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:16:03.420379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:16:03.420392 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:16:03.420403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:03.420437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:16:03.420448 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:03.420458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:03.420468 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:16:03.420478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:03.420488 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:16:03.420498 | orchestrator | 2026-02-18 05:16:03.420509 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-18 05:16:03.420521 | orchestrator | Wednesday 18 February 2026 05:15:51 +0000 (0:00:03.323) 0:00:24.481 
**** 2026-02-18 05:16:03.420531 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:16:03.420556 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:16:03.420566 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:16:03.420575 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:16:03.420585 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:16:03.420595 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:16:03.420604 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:16:03.420614 | orchestrator | 2026-02-18 05:16:03.420624 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-18 05:16:03.420634 | orchestrator | Wednesday 18 February 2026 05:15:53 +0000 (0:00:02.120) 0:00:26.602 **** 2026-02-18 05:16:03.420643 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:16:03.420653 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:16:03.420663 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:16:03.420677 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:16:03.420687 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:16:03.420696 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:16:03.420706 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:16:03.420715 | orchestrator | 2026-02-18 05:16:03.420725 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-18 05:16:03.420743 | orchestrator | Wednesday 18 February 2026 05:15:55 +0000 (0:00:02.025) 0:00:28.627 **** 2026-02-18 05:16:03.420755 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:16:03.420765 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:16:03.420777 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:16:03.420788 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:16:03.420799 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:16:03.420810 | orchestrator | skipping: [testbed-node-4] 
2026-02-18 05:16:03.420821 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:16:03.420832 | orchestrator | 2026-02-18 05:16:03.420842 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-18 05:16:03.420854 | orchestrator | Wednesday 18 February 2026 05:15:57 +0000 (0:00:01.942) 0:00:30.570 **** 2026-02-18 05:16:03.420865 | orchestrator | changed: [testbed-manager] 2026-02-18 05:16:03.420877 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:16:03.420888 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:16:03.420899 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:16:03.420910 | orchestrator | changed: [testbed-node-3] 2026-02-18 05:16:03.420921 | orchestrator | changed: [testbed-node-4] 2026-02-18 05:16:03.420932 | orchestrator | changed: [testbed-node-5] 2026-02-18 05:16:03.420943 | orchestrator | 2026-02-18 05:16:03.420954 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-18 05:16:03.420966 | orchestrator | Wednesday 18 February 2026 05:16:00 +0000 (0:00:03.129) 0:00:33.699 **** 2026-02-18 05:16:03.420978 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:03.420990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:03.421003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:03.421077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:03.421098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 
05:16:05.386423 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386526 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:05.386543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:05.386570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386626 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386664 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386680 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:05.386828 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:26.418434 | orchestrator | 2026-02-18 05:16:26.418549 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-18 05:16:26.418568 | orchestrator | Wednesday 18 February 2026 05:16:05 +0000 (0:00:04.861) 0:00:38.560 **** 2026-02-18 05:16:26.418581 | orchestrator | [WARNING]: Skipped 2026-02-18 05:16:26.418594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-18 05:16:26.418607 | orchestrator | to this access issue: 2026-02-18 05:16:26.418618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-18 05:16:26.418629 | orchestrator | directory 2026-02-18 05:16:26.418640 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 05:16:26.418651 | orchestrator | 2026-02-18 05:16:26.418662 | orchestrator | TASK 
[common : Find custom fluentd filter config files] ************************ 2026-02-18 05:16:26.418673 | orchestrator | Wednesday 18 February 2026 05:16:07 +0000 (0:00:02.327) 0:00:40.888 **** 2026-02-18 05:16:26.418684 | orchestrator | [WARNING]: Skipped 2026-02-18 05:16:26.418695 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-18 05:16:26.418706 | orchestrator | to this access issue: 2026-02-18 05:16:26.418716 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-18 05:16:26.418727 | orchestrator | directory 2026-02-18 05:16:26.418738 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 05:16:26.418748 | orchestrator | 2026-02-18 05:16:26.418759 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-18 05:16:26.418770 | orchestrator | Wednesday 18 February 2026 05:16:09 +0000 (0:00:01.951) 0:00:42.840 **** 2026-02-18 05:16:26.418781 | orchestrator | [WARNING]: Skipped 2026-02-18 05:16:26.418792 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-18 05:16:26.418803 | orchestrator | to this access issue: 2026-02-18 05:16:26.418813 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-18 05:16:26.418824 | orchestrator | directory 2026-02-18 05:16:26.418835 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 05:16:26.418845 | orchestrator | 2026-02-18 05:16:26.418856 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-18 05:16:26.418867 | orchestrator | Wednesday 18 February 2026 05:16:11 +0000 (0:00:01.946) 0:00:44.786 **** 2026-02-18 05:16:26.418957 | orchestrator | [WARNING]: Skipped 2026-02-18 05:16:26.418974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-18 05:16:26.418987 | 
orchestrator | to this access issue: 2026-02-18 05:16:26.419000 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-18 05:16:26.419012 | orchestrator | directory 2026-02-18 05:16:26.419025 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 05:16:26.419037 | orchestrator | 2026-02-18 05:16:26.419049 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-18 05:16:26.419086 | orchestrator | Wednesday 18 February 2026 05:16:13 +0000 (0:00:01.915) 0:00:46.702 **** 2026-02-18 05:16:26.419098 | orchestrator | changed: [testbed-manager] 2026-02-18 05:16:26.419111 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:16:26.419123 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:16:26.419135 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:16:26.419146 | orchestrator | changed: [testbed-node-3] 2026-02-18 05:16:26.419159 | orchestrator | changed: [testbed-node-4] 2026-02-18 05:16:26.419171 | orchestrator | changed: [testbed-node-5] 2026-02-18 05:16:26.419183 | orchestrator | 2026-02-18 05:16:26.419196 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-18 05:16:26.419209 | orchestrator | Wednesday 18 February 2026 05:16:17 +0000 (0:00:04.107) 0:00:50.810 **** 2026-02-18 05:16:26.419221 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:16:26.419234 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:16:26.419246 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:16:26.419259 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:16:26.419271 | orchestrator | ok: [testbed-node-3] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:16:26.419282 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:16:26.419294 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:16:26.419306 | orchestrator | 2026-02-18 05:16:26.419318 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-18 05:16:26.419329 | orchestrator | Wednesday 18 February 2026 05:16:20 +0000 (0:00:03.081) 0:00:53.892 **** 2026-02-18 05:16:26.419339 | orchestrator | ok: [testbed-manager] 2026-02-18 05:16:26.419350 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:16:26.419361 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:16:26.419371 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:16:26.419382 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:16:26.419392 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:16:26.419403 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:16:26.419413 | orchestrator | 2026-02-18 05:16:26.419424 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-18 05:16:26.419435 | orchestrator | Wednesday 18 February 2026 05:16:23 +0000 (0:00:02.739) 0:00:56.631 **** 2026-02-18 05:16:26.419482 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:26.419500 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:26.419513 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:26.419534 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:26.419548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:26.419560 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:26.419572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:26.419595 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:32.741576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:32.741697 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:32.741741 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:32.741755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:32.741766 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:32.741778 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:32.741790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:32.741819 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:32.741832 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:32.741944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:32.741960 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:32.741972 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:32.741984 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:32.741995 | orchestrator | 2026-02-18 05:16:32.742008 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-18 05:16:32.742082 | orchestrator | Wednesday 18 February 2026 05:16:26 +0000 (0:00:02.952) 0:00:59.584 **** 2026-02-18 05:16:32.742094 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:16:32.742106 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:16:32.742117 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:16:32.742127 | orchestrator | ok: [testbed-node-2] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:16:32.742138 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:16:32.742184 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:16:32.742195 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:16:32.742206 | orchestrator | 2026-02-18 05:16:32.742217 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-18 05:16:32.742243 | orchestrator | Wednesday 18 February 2026 05:16:29 +0000 (0:00:02.944) 0:01:02.528 **** 2026-02-18 05:16:32.742254 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:16:32.742265 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:16:32.742276 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:16:32.742287 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:16:32.742298 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:16:32.742333 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:16:35.089516 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:16:35.089614 | orchestrator | 2026-02-18 05:16:35.089630 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-18 05:16:35.089643 | orchestrator | Wednesday 18 February 2026 05:16:32 +0000 (0:00:03.381) 0:01:05.910 **** 2026-02-18 05:16:35.089656 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:35.089671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:35.089683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:35.089695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:35.089706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:35.089717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:35.089728 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:16:35.089796 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:35.089812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:35.089823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:35.089904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:35.089916 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:35.089927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:35.089966 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:39.676118 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:39.676225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:39.676242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:39.676255 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:39.676267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:39.676279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:39.676290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:16:39.676326 | orchestrator | 2026-02-18 05:16:39.676340 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-18 05:16:39.676352 | orchestrator 
| Wednesday 18 February 2026 05:16:37 +0000 (0:00:04.339) 0:01:10.250 **** 2026-02-18 05:16:39.676365 | orchestrator | changed: [testbed-manager] => { 2026-02-18 05:16:39.676377 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:16:39.676388 | orchestrator | } 2026-02-18 05:16:39.676399 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 05:16:39.676410 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:16:39.676437 | orchestrator | } 2026-02-18 05:16:39.676448 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:16:39.676459 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:16:39.676469 | orchestrator | } 2026-02-18 05:16:39.676480 | orchestrator | changed: [testbed-node-2] => { 2026-02-18 05:16:39.676491 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:16:39.676502 | orchestrator | } 2026-02-18 05:16:39.676513 | orchestrator | changed: [testbed-node-3] => { 2026-02-18 05:16:39.676524 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:16:39.676535 | orchestrator | } 2026-02-18 05:16:39.676545 | orchestrator | changed: [testbed-node-4] => { 2026-02-18 05:16:39.676556 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:16:39.676567 | orchestrator | } 2026-02-18 05:16:39.676592 | orchestrator | changed: [testbed-node-5] => { 2026-02-18 05:16:39.676603 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:16:39.676614 | orchestrator | } 2026-02-18 05:16:39.676626 | orchestrator | 2026-02-18 05:16:39.676638 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-18 05:16:39.676650 | orchestrator | Wednesday 18 February 2026 05:16:39 +0000 (0:00:02.078) 0:01:12.328 **** 2026-02-18 05:16:39.676683 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:16:39.676699 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:39.676715 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:39.676728 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:16:39.676741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:16:39.676774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:39.676788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:39.676801 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:16:39.676837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:16:39.676865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:46.170826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:46.170937 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:16:46.170958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:16:46.170972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:46.171010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:46.171023 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:16:46.171052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:16:46.171064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:46.171081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:46.171091 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:16:46.171122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:16:46.171135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:46.171147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:46.171159 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:16:46.171179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:16:46.171191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:46.171203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:16:46.171214 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:16:46.171225 | orchestrator | 2026-02-18 05:16:46.171238 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-18 05:16:46.171251 | orchestrator | Wednesday 18 February 2026 05:16:42 +0000 (0:00:02.995) 0:01:15.324 **** 2026-02-18 05:16:46.171262 | orchestrator | 2026-02-18 05:16:46.171273 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-18 05:16:46.171283 | orchestrator | Wednesday 18 February 2026 05:16:42 +0000 (0:00:00.466) 0:01:15.790 **** 2026-02-18 05:16:46.171294 | orchestrator | 2026-02-18 05:16:46.171305 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-18 05:16:46.171316 | orchestrator | Wednesday 18 February 2026 05:16:43 +0000 (0:00:00.498) 0:01:16.289 **** 2026-02-18 05:16:46.171327 | orchestrator | 2026-02-18 05:16:46.171342 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-18 05:16:46.171353 | orchestrator | Wednesday 18 February 2026 05:16:43 +0000 (0:00:00.459) 0:01:16.748 **** 2026-02-18 05:16:46.171364 | orchestrator | 2026-02-18 05:16:46.171374 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-18 05:16:46.171385 | orchestrator | Wednesday 18 February 2026 05:16:43 +0000 (0:00:00.440) 0:01:17.188 **** 2026-02-18 05:16:46.171396 | orchestrator | 2026-02-18 05:16:46.171407 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-18 05:16:46.171417 | orchestrator | Wednesday 18 February 2026 05:16:44 +0000 (0:00:00.777) 0:01:17.965 **** 2026-02-18 05:16:46.171428 | 
orchestrator | 2026-02-18 05:16:46.171439 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-18 05:16:46.171450 | orchestrator | Wednesday 18 February 2026 05:16:45 +0000 (0:00:00.420) 0:01:18.385 **** 2026-02-18 05:16:46.171461 | orchestrator | 2026-02-18 05:16:46.171479 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-18 05:16:48.833379 | orchestrator | Wednesday 18 February 2026 05:16:46 +0000 (0:00:00.937) 0:01:19.323 **** 2026-02-18 05:16:48.833483 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_u4elkbn2/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_u4elkbn2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_u4elkbn2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-18 05:16:48.833554 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_e_0yfm00/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_e_0yfm00/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_e_0yfm00/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-18 05:16:48.833578 | orchestrator | fatal: [testbed-node-3]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_hcspdsyb/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_hcspdsyb/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_hcspdsyb/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-18 05:16:48.833605 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_b_x5llyo/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_b_x5llyo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_b_x5llyo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-18 05:16:52.208053 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ul570tr7/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ul570tr7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ul570tr7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-18 05:16:52.208220 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_un8yy2va/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_un8yy2va/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_un8yy2va/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-18 05:16:52.208273 | orchestrator | fatal: [testbed-node-5]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_kpvvu4xw/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_kpvvu4xw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_kpvvu4xw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-18 05:16:52.208290 | orchestrator | 2026-02-18 05:16:52.208308 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:16:52.208325 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-18 05:16:52.208342 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-18 05:16:52.208381 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-18 05:16:52.208398 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-18 05:16:52.208413 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-18 05:16:52.208429 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-18 05:16:52.208489 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-18 05:16:52.208506 | orchestrator | 2026-02-18 05:16:52.208520 | orchestrator | 2026-02-18 05:16:52.208547 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 05:16:52.694160 | orchestrator | 2026-02-18 05:16:52 | INFO  | Task c358a26c-c904-49ff-9c08-080f09053462 (common) was prepared for execution. 2026-02-18 05:16:52.694280 | orchestrator | 2026-02-18 05:16:52 | INFO  | It takes a moment until task c358a26c-c904-49ff-9c08-080f09053462 (common) has been started and output is visible here. 
2026-02-18 05:17:11.472764 | orchestrator | Wednesday 18 February 2026 05:16:52 +0000 (0:00:06.058) 0:01:25.382 **** 2026-02-18 05:17:11.472880 | orchestrator | =============================================================================== 2026-02-18 05:17:11.472896 | orchestrator | common : Restart fluentd container -------------------------------------- 6.06s 2026-02-18 05:17:11.472908 | orchestrator | common : Copying over config.json files for services -------------------- 4.86s 2026-02-18 05:17:11.472937 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.83s 2026-02-18 05:17:11.472960 | orchestrator | service-check-containers : common | Check containers -------------------- 4.34s 2026-02-18 05:17:11.472971 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.11s 2026-02-18 05:17:11.472982 | orchestrator | common : include_tasks -------------------------------------------------- 4.07s 2026-02-18 05:17:11.472993 | orchestrator | common : Flush handlers ------------------------------------------------- 4.00s 2026-02-18 05:17:11.473004 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.92s 2026-02-18 05:17:11.473015 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.38s 2026-02-18 05:17:11.473026 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.32s 2026-02-18 05:17:11.473037 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.13s 2026-02-18 05:17:11.473047 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.12s 2026-02-18 05:17:11.473059 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.08s 2026-02-18 05:17:11.473070 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.00s 2026-02-18 
05:17:11.473080 | orchestrator | common : include_tasks -------------------------------------------------- 2.96s 2026-02-18 05:17:11.473091 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.95s 2026-02-18 05:17:11.473102 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.94s 2026-02-18 05:17:11.473113 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.74s 2026-02-18 05:17:11.473123 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.33s 2026-02-18 05:17:11.473134 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.12s 2026-02-18 05:17:11.473145 | orchestrator | 2026-02-18 05:17:11.473158 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-18 05:17:11.473193 | orchestrator | 2026-02-18 05:17:11.473205 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-18 05:17:11.473216 | orchestrator | Wednesday 18 February 2026 05:16:58 +0000 (0:00:01.982) 0:00:01.982 **** 2026-02-18 05:17:11.473242 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 05:17:11.473255 | orchestrator | 2026-02-18 05:17:11.473269 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-18 05:17:11.473281 | orchestrator | Wednesday 18 February 2026 05:17:02 +0000 (0:00:03.631) 0:00:05.614 **** 2026-02-18 05:17:11.473294 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:17:11.473306 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:17:11.473319 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 
'cron'}, 'cron']) 2026-02-18 05:17:11.473331 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:17:11.473343 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:17:11.473355 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:17:11.473367 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:17:11.473379 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:17:11.473392 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-18 05:17:11.473404 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:17:11.473417 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:17:11.473429 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:17:11.473441 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:17:11.473453 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:17:11.473465 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:17:11.473478 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-18 05:17:11.473490 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:17:11.473502 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:17:11.473514 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:17:11.473526 | orchestrator | ok: [testbed-node-4] => 
(item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:17:11.473556 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-18 05:17:11.473570 | orchestrator | 2026-02-18 05:17:11.473582 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-18 05:17:11.473594 | orchestrator | Wednesday 18 February 2026 05:17:05 +0000 (0:00:03.409) 0:00:09.023 **** 2026-02-18 05:17:11.473608 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 05:17:11.473621 | orchestrator | 2026-02-18 05:17:11.473633 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-18 05:17:11.473662 | orchestrator | Wednesday 18 February 2026 05:17:08 +0000 (0:00:02.853) 0:00:11.877 **** 2026-02-18 05:17:11.473676 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:11.473702 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:11.473719 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:11.473731 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:11.473742 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:11.473753 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:11.473772 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:14.978433 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978566 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978598 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978611 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978696 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978709 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978738 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978760 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978775 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978786 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978804 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978815 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-18 05:17:14.978827 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978838 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:14.978850 | orchestrator | 2026-02-18 05:17:14.978863 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-18 05:17:14.978875 | orchestrator | Wednesday 18 February 2026 05:17:14 +0000 (0:00:05.351) 0:00:17.229 **** 2026-02-18 05:17:14.978889 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:14.978917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:17.555747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:17.555859 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:17.555894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:17.555910 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:17:17.555924 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:17.555935 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:17:17.555947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:17.555960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:17.555997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:17.556029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:17.556041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:17.556053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:17.556065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:17.556076 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:17:17.556087 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:17:17.556098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:17.556151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:17.556185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:17.556217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:18.906593 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:17:18.906748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-18 05:17:18.906765 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:17:18.906780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:18.906812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:18.906824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:18.906836 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:17:18.906847 | orchestrator | 2026-02-18 05:17:18.906859 | orchestrator | TASK [service-cert-copy : 
common | Copying over backend internal TLS key] ****** 2026-02-18 05:17:18.906872 | orchestrator | Wednesday 18 February 2026 05:17:17 +0000 (0:00:03.360) 0:00:20.590 **** 2026-02-18 05:17:18.906908 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:18.906921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:18.906933 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-18 05:17:18.906962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:18.906974 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:18.906985 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:17:18.906997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:18.907009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:18.907029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:18.907041 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:17:18.907053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:18.907065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:18.907085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:32.823745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:32.823836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:32.823853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:32.823873 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:17:32.823879 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:17:32.823883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:32.823888 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:17:32.823891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:32.823896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:32.823900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:17:32.823914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:32.823918 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:17:32.823925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 05:17:32.823929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-18 05:17:32.823938 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:17:32.823942 | orchestrator |
2026-02-18 05:17:32.823947 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-02-18 05:17:32.823952 | orchestrator | Wednesday 18 February 2026 05:17:20 +0000 (0:00:02.134) 0:00:23.834 ****
2026-02-18 05:17:32.823956 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:17:32.823960 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:17:32.823964 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:17:32.823968 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:17:32.823972 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:17:32.823975 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:17:32.823979 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:17:32.823983 | orchestrator |
2026-02-18 05:17:32.823987 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-18 05:17:32.823991 | orchestrator | Wednesday 18 February 2026 05:17:22 +0000 (0:00:02.034) 0:00:25.968 ****
2026-02-18 05:17:32.823994 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:17:32.823998 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:17:32.824002 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:17:32.824005 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:17:32.824009 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:17:32.824013 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:17:32.824017 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:17:32.824020 | orchestrator |
2026-02-18 05:17:32.824024 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-18 05:17:32.824028 | orchestrator | Wednesday 18 February 2026 05:17:24 +0000 (0:00:01.984) 0:00:28.003 ****
2026-02-18 05:17:32.824032 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:17:32.824035 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:17:32.824039 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:17:32.824043 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:17:32.824047 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:17:32.824050 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:17:32.824054 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:17:32.824058 | orchestrator |
2026-02-18 05:17:32.824062 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-02-18 05:17:32.824065 | orchestrator | Wednesday 18 February 2026 05:17:26 +0000 (0:00:02.883) 0:00:29.987 ****
2026-02-18 05:17:32.824069 | orchestrator | ok: [testbed-manager]
2026-02-18 05:17:32.824074 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:17:32.824078 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:17:32.824082 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:17:32.824086 | orchestrator | ok: [testbed-node-3]
2026-02-18 05:17:32.824090 | orchestrator | ok: [testbed-node-4]
2026-02-18 05:17:32.824093 | orchestrator | ok: [testbed-node-5]
2026-02-18 05:17:32.824097 | orchestrator |
2026-02-18
05:17:32.824101 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-18 05:17:32.824105 | orchestrator | Wednesday 18 February 2026 05:17:29 +0000 (0:00:02.883) 0:00:32.871 **** 2026-02-18 05:17:32.824109 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:32.824118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:35.555750 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:35.555864 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:35.555880 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:35.555893 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:35.555905 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:35.555917 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:35.555928 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:35.555984 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:35.556002 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:35.556015 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:35.556029 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 
05:17:35.556041 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:35.556052 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:35.556064 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:35.556091 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:54.663957 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:54.664083 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:54.664102 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:54.664115 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:54.664128 | orchestrator | 2026-02-18 05:17:54.664141 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-18 05:17:54.664173 | orchestrator | Wednesday 18 February 2026 05:17:35 +0000 (0:00:05.720) 0:00:38.591 **** 2026-02-18 05:17:54.664186 | orchestrator | [WARNING]: Skipped 2026-02-18 05:17:54.664199 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-18 05:17:54.664211 | orchestrator | to this access issue: 2026-02-18 05:17:54.664223 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-18 05:17:54.664234 | orchestrator | directory 2026-02-18 05:17:54.664245 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 05:17:54.664257 | orchestrator | 2026-02-18 05:17:54.664268 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-18 05:17:54.664279 | orchestrator | Wednesday 18 February 2026 05:17:37 +0000 (0:00:02.402) 0:00:40.994 **** 2026-02-18 05:17:54.664290 | orchestrator | [WARNING]: Skipped 2026-02-18 05:17:54.664301 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-18 05:17:54.664312 | orchestrator | to this access issue: 2026-02-18 05:17:54.664323 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-18 05:17:54.664360 | orchestrator | directory 2026-02-18 05:17:54.664372 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 05:17:54.664383 | orchestrator | 2026-02-18 05:17:54.664400 | orchestrator | TASK [common : Find 
custom fluentd format config files] ************************ 2026-02-18 05:17:54.664418 | orchestrator | Wednesday 18 February 2026 05:17:39 +0000 (0:00:01.879) 0:00:42.874 **** 2026-02-18 05:17:54.664486 | orchestrator | [WARNING]: Skipped 2026-02-18 05:17:54.664506 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-18 05:17:54.664525 | orchestrator | to this access issue: 2026-02-18 05:17:54.664546 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-18 05:17:54.664565 | orchestrator | directory 2026-02-18 05:17:54.664584 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 05:17:54.664597 | orchestrator | 2026-02-18 05:17:54.664611 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-18 05:17:54.664624 | orchestrator | Wednesday 18 February 2026 05:17:41 +0000 (0:00:01.907) 0:00:44.781 **** 2026-02-18 05:17:54.664636 | orchestrator | [WARNING]: Skipped 2026-02-18 05:17:54.664648 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-18 05:17:54.664660 | orchestrator | to this access issue: 2026-02-18 05:17:54.664673 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-18 05:17:54.664685 | orchestrator | directory 2026-02-18 05:17:54.664698 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-18 05:17:54.664710 | orchestrator | 2026-02-18 05:17:54.664723 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-18 05:17:54.664735 | orchestrator | Wednesday 18 February 2026 05:17:43 +0000 (0:00:01.878) 0:00:46.660 **** 2026-02-18 05:17:54.664748 | orchestrator | ok: [testbed-manager] 2026-02-18 05:17:54.664760 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:17:54.664772 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:17:54.664804 | 
orchestrator | ok: [testbed-node-5] 2026-02-18 05:17:54.664817 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:17:54.664830 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:17:54.664841 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:17:54.664852 | orchestrator | 2026-02-18 05:17:54.664863 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-18 05:17:54.664874 | orchestrator | Wednesday 18 February 2026 05:17:47 +0000 (0:00:04.092) 0:00:50.752 **** 2026-02-18 05:17:54.664885 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:17:54.664898 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:17:54.664909 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:17:54.664927 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:17:54.664938 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:17:54.664949 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:17:54.664960 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-18 05:17:54.664971 | orchestrator | 2026-02-18 05:17:54.664982 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-18 05:17:54.664993 | orchestrator | Wednesday 18 February 2026 05:17:50 +0000 (0:00:03.285) 0:00:54.038 **** 2026-02-18 05:17:54.665004 | orchestrator | ok: [testbed-manager] 2026-02-18 05:17:54.665015 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:17:54.665026 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:17:54.665037 | 
orchestrator | ok: [testbed-node-2] 2026-02-18 05:17:54.665048 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:17:54.665058 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:17:54.665079 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:17:54.665090 | orchestrator | 2026-02-18 05:17:54.665101 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-18 05:17:54.665112 | orchestrator | Wednesday 18 February 2026 05:17:53 +0000 (0:00:02.763) 0:00:56.802 **** 2026-02-18 05:17:54.665125 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:54.665139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:54.665151 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:54.665163 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:54.665183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:55.520511 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:55.520629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:55.520670 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:55.520683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:55.520695 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:55.520706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:55.520719 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:55.520768 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:55.520782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:55.520800 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:55.520811 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:17:55.520823 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:17:55.520835 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:55.520847 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:55.520858 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:17:55.520877 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:05.614600 | orchestrator | 2026-02-18 05:18:05.614716 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-18 05:18:05.614754 | orchestrator | Wednesday 18 February 2026 05:17:56 +0000 (0:00:02.907) 0:00:59.710 **** 2026-02-18 05:18:05.614760 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:18:05.614767 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:18:05.614773 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:18:05.614779 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:18:05.614785 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:18:05.614791 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:18:05.614797 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-18 05:18:05.614804 | orchestrator | 2026-02-18 05:18:05.614810 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-18 05:18:05.614816 | orchestrator | Wednesday 18 February 2026 05:17:59 +0000 (0:00:03.220) 0:01:02.931 **** 
2026-02-18 05:18:05.614822 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:18:05.614829 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:18:05.614835 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:18:05.614842 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:18:05.614848 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:18:05.614854 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:18:05.614860 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-18 05:18:05.614866 | orchestrator | 2026-02-18 05:18:05.614872 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-18 05:18:05.614879 | orchestrator | Wednesday 18 February 2026 05:18:03 +0000 (0:00:03.397) 0:01:06.328 **** 2026-02-18 05:18:05.614889 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:18:05.614898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:18:05.614904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:18:05.614911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:18:05.614947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2026-02-18 05:18:05.614956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:18:05.614963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-18 05:18:05.614971 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:05.614978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:05.614985 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:05.614997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:05.615013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:10.230140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:10.230240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:10.230256 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:10.230271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:10.230283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:10.230322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:10.230334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:10.230426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:10.230461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:18:10.230474 | orchestrator | 2026-02-18 05:18:10.230487 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-18 05:18:10.230499 | orchestrator | Wednesday 18 February 2026 05:18:07 +0000 (0:00:04.410) 0:01:10.739 **** 2026-02-18 05:18:10.230511 | orchestrator | changed: [testbed-manager] => { 2026-02-18 05:18:10.230524 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:18:10.230536 | orchestrator | } 2026-02-18 05:18:10.230547 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 05:18:10.230558 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:18:10.230569 | orchestrator | } 2026-02-18 05:18:10.230579 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:18:10.230590 | orchestrator |  "msg": 
"Notifying handlers" 2026-02-18 05:18:10.230601 | orchestrator | } 2026-02-18 05:18:10.230611 | orchestrator | changed: [testbed-node-2] => { 2026-02-18 05:18:10.230622 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:18:10.230633 | orchestrator | } 2026-02-18 05:18:10.230643 | orchestrator | changed: [testbed-node-3] => { 2026-02-18 05:18:10.230654 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:18:10.230664 | orchestrator | } 2026-02-18 05:18:10.230675 | orchestrator | changed: [testbed-node-4] => { 2026-02-18 05:18:10.230686 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:18:10.230696 | orchestrator | } 2026-02-18 05:18:10.230707 | orchestrator | changed: [testbed-node-5] => { 2026-02-18 05:18:10.230718 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:18:10.230728 | orchestrator | } 2026-02-18 05:18:10.230739 | orchestrator | 2026-02-18 05:18:10.230750 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-18 05:18:10.230761 | orchestrator | Wednesday 18 February 2026 05:18:09 +0000 (0:00:02.114) 0:01:12.853 **** 2026-02-18 05:18:10.230773 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:18:10.230795 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:18:10.230807 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:18:10.230818 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:18:10.230829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:18:10.230857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:18:16.932045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:18:16.932166 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:18:16.932204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:18:16.932232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-18 05:18:16.932267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:18:16.932280 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:18:16.932292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:18:16.932304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:18:16.932374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:18:16.932389 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:18:16.932421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:18:16.932433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:18:16.932445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:18:16.932465 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:18:16.932476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:18:16.932488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:18:16.932499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:18:16.932510 | orchestrator | skipping: [testbed-node-4] 2026-02-18 
05:18:16.932526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-18 05:18:16.932546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:19:46.142480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:19:46.142581 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:19:46.142596 | orchestrator | 2026-02-18 05:19:46.142604 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-18 05:19:46.142634 | orchestrator | 
Wednesday 18 February 2026 05:18:12 +0000 (0:00:02.932) 0:01:15.786 ****
2026-02-18 05:19:46.142642 | orchestrator |
2026-02-18 05:19:46.142650 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 05:19:46.142657 | orchestrator | Wednesday 18 February 2026 05:18:13 +0000 (0:00:00.489) 0:01:16.276 ****
2026-02-18 05:19:46.142664 | orchestrator |
2026-02-18 05:19:46.142672 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 05:19:46.142679 | orchestrator | Wednesday 18 February 2026 05:18:13 +0000 (0:00:00.482) 0:01:16.759 ****
2026-02-18 05:19:46.142686 | orchestrator |
2026-02-18 05:19:46.142694 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 05:19:46.142702 | orchestrator | Wednesday 18 February 2026 05:18:14 +0000 (0:00:00.501) 0:01:17.260 ****
2026-02-18 05:19:46.142709 | orchestrator |
2026-02-18 05:19:46.142716 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 05:19:46.142723 | orchestrator | Wednesday 18 February 2026 05:18:14 +0000 (0:00:00.472) 0:01:17.733 ****
2026-02-18 05:19:46.142731 | orchestrator |
2026-02-18 05:19:46.142738 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 05:19:46.142745 | orchestrator | Wednesday 18 February 2026 05:18:15 +0000 (0:00:00.731) 0:01:18.465 ****
2026-02-18 05:19:46.142752 | orchestrator |
2026-02-18 05:19:46.142760 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-18 05:19:46.142768 | orchestrator | Wednesday 18 February 2026 05:18:16 +0000 (0:00:00.631) 0:01:19.096 ****
2026-02-18 05:19:46.142775 | orchestrator |
2026-02-18 05:19:46.142782 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-18 05:19:46.142790 | orchestrator | Wednesday 18 February 2026 05:18:16 +0000 (0:00:00.861) 0:01:19.958 ****
2026-02-18 05:19:46.142798 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:19:46.142805 | orchestrator | changed: [testbed-node-5]
2026-02-18 05:19:46.142812 | orchestrator | changed: [testbed-manager]
2026-02-18 05:19:46.142820 | orchestrator | changed: [testbed-node-2]
2026-02-18 05:19:46.142827 | orchestrator | changed: [testbed-node-3]
2026-02-18 05:19:46.142834 | orchestrator | changed: [testbed-node-4]
2026-02-18 05:19:46.142841 | orchestrator | changed: [testbed-node-1]
2026-02-18 05:19:46.142848 | orchestrator |
2026-02-18 05:19:46.142855 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-18 05:19:46.142862 | orchestrator | Wednesday 18 February 2026 05:18:54 +0000 (0:00:37.286) 0:01:57.244 ****
2026-02-18 05:19:46.142869 | orchestrator | changed: [testbed-node-5]
2026-02-18 05:19:46.142877 | orchestrator | changed: [testbed-node-2]
2026-02-18 05:19:46.142884 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:19:46.142892 | orchestrator | changed: [testbed-manager]
2026-02-18 05:19:46.142900 | orchestrator | changed: [testbed-node-4]
2026-02-18 05:19:46.142907 | orchestrator | changed: [testbed-node-3]
2026-02-18 05:19:46.142915 | orchestrator | changed: [testbed-node-1]
2026-02-18 05:19:46.142923 | orchestrator |
2026-02-18 05:19:46.142931 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-18 05:19:46.142939 | orchestrator | Wednesday 18 February 2026 05:19:30 +0000 (0:00:36.500) 0:02:33.745 ****
2026-02-18 05:19:46.142947 | orchestrator | ok: [testbed-manager]
2026-02-18 05:19:46.143014 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:19:46.143023 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:19:46.143031 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:19:46.143038 | orchestrator | ok: [testbed-node-3]
2026-02-18 05:19:46.143046 | orchestrator |
ok: [testbed-node-4]
2026-02-18 05:19:46.143062 | orchestrator | ok: [testbed-node-5]
2026-02-18 05:19:46.143071 | orchestrator |
2026-02-18 05:19:46.143079 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-18 05:19:46.143088 | orchestrator | Wednesday 18 February 2026 05:19:33 +0000 (0:00:03.015) 0:02:36.760 ****
2026-02-18 05:19:46.143095 | orchestrator | changed: [testbed-manager]
2026-02-18 05:19:46.143104 | orchestrator | changed: [testbed-node-3]
2026-02-18 05:19:46.143127 | orchestrator | changed: [testbed-node-5]
2026-02-18 05:19:46.143136 | orchestrator | changed: [testbed-node-1]
2026-02-18 05:19:46.143144 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:19:46.143152 | orchestrator | changed: [testbed-node-4]
2026-02-18 05:19:46.143160 | orchestrator | changed: [testbed-node-2]
2026-02-18 05:19:46.143168 | orchestrator |
2026-02-18 05:19:46.143177 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 05:19:46.143199 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 05:19:46.143209 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 05:19:46.143217 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 05:19:46.143225 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 05:19:46.143251 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 05:19:46.143260 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 05:19:46.143269 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-18 05:19:46.143277 | orchestrator |
2026-02-18 05:19:46.143285 | orchestrator |
2026-02-18 05:19:46.143293 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 05:19:46.143302 | orchestrator | Wednesday 18 February 2026 05:19:45 +0000 (0:00:11.868) 0:02:48.629 ****
2026-02-18 05:19:46.143310 | orchestrator | ===============================================================================
2026-02-18 05:19:46.143318 | orchestrator | common : Restart fluentd container ------------------------------------- 37.29s
2026-02-18 05:19:46.143326 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 36.50s
2026-02-18 05:19:46.143334 | orchestrator | common : Restart cron container ---------------------------------------- 11.87s
2026-02-18 05:19:46.143342 | orchestrator | common : Copying over config.json files for services -------------------- 5.72s
2026-02-18 05:19:46.143350 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.35s
2026-02-18 05:19:46.143358 | orchestrator | service-check-containers : common | Check containers -------------------- 4.41s
2026-02-18 05:19:46.143367 | orchestrator | common : Flush handlers ------------------------------------------------- 4.17s
2026-02-18 05:19:46.143374 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.09s
2026-02-18 05:19:46.143381 | orchestrator | common : include_tasks -------------------------------------------------- 3.63s
2026-02-18 05:19:46.143388 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.41s
2026-02-18 05:19:46.143396 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.40s
2026-02-18 05:19:46.143404 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.36s
2026-02-18 05:19:46.143413 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.29s
2026-02-18 05:19:46.143420 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.24s
2026-02-18 05:19:46.143427 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.22s
2026-02-18 05:19:46.143433 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.02s
2026-02-18 05:19:46.143440 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.93s
2026-02-18 05:19:46.143447 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.91s
2026-02-18 05:19:46.143463 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.88s
2026-02-18 05:19:46.143471 | orchestrator | common : include_tasks -------------------------------------------------- 2.85s
2026-02-18 05:19:46.460948 | orchestrator | + osism apply -a upgrade loadbalancer
2026-02-18 05:19:48.629683 | orchestrator | 2026-02-18 05:19:48 | INFO  | Task 2012b74a-f0a7-4689-83fa-54bc0f5b6d7a (loadbalancer) was prepared for execution.
2026-02-18 05:19:48.629803 | orchestrator | 2026-02-18 05:19:48 | INFO  | It takes a moment until task 2012b74a-f0a7-4689-83fa-54bc0f5b6d7a (loadbalancer) has been started and output is visible here.
2026-02-18 05:20:24.614370 | orchestrator |
2026-02-18 05:20:24.614507 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-18 05:20:24.614526 | orchestrator |
2026-02-18 05:20:24.614538 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-18 05:20:24.614550 | orchestrator | Wednesday 18 February 2026 05:19:54 +0000 (0:00:01.390) 0:00:01.390 ****
2026-02-18 05:20:24.614561 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:20:24.614573 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:20:24.614584 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:20:24.614595 | orchestrator |
2026-02-18 05:20:24.614605 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-18 05:20:24.614616 | orchestrator | Wednesday 18 February 2026 05:19:56 +0000 (0:00:01.954) 0:00:03.344 ****
2026-02-18 05:20:24.614629 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-18 05:20:24.614639 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-18 05:20:24.614650 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-18 05:20:24.614661 | orchestrator |
2026-02-18 05:20:24.614672 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-18 05:20:24.614682 | orchestrator |
2026-02-18 05:20:24.614710 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-18 05:20:24.614721 | orchestrator | Wednesday 18 February 2026 05:19:59 +0000 (0:00:02.496) 0:00:05.841 ****
2026-02-18 05:20:24.614733 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 05:20:24.614744 | orchestrator |
2026-02-18 05:20:24.614755 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-02-18 05:20:24.614766 | orchestrator | Wednesday 18 February 2026 05:20:01 +0000 (0:00:02.232) 0:00:08.340 ****
2026-02-18 05:20:24.614777 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:20:24.614787 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:20:24.614798 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:20:24.614809 | orchestrator |
2026-02-18 05:20:24.614914 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-02-18 05:20:24.614936 | orchestrator | Wednesday 18 February 2026 05:20:04 +0000 (0:00:02.232) 0:00:10.572 ****
2026-02-18 05:20:24.614957 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:20:24.614978 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:20:24.614998 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:20:24.615017 | orchestrator |
2026-02-18 05:20:24.615036 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-18 05:20:24.615057 | orchestrator | Wednesday 18 February 2026 05:20:06 +0000 (0:00:02.427) 0:00:12.999 ****
2026-02-18 05:20:24.615078 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:20:24.615097 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:20:24.615115 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:20:24.615128 | orchestrator |
2026-02-18 05:20:24.615141 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-18 05:20:24.615154 | orchestrator | Wednesday 18 February 2026 05:20:08 +0000 (0:00:02.021) 0:00:15.021 ****
2026-02-18 05:20:24.615166 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 05:20:24.615179 | orchestrator |
2026-02-18 05:20:24.615219 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-18 05:20:24.615232 | orchestrator | Wednesday 18 February 2026 05:20:10 +0000 (0:00:02.011) 0:00:17.032 ****
2026-02-18 05:20:24.615244 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:20:24.615255 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:20:24.615266 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:20:24.615277 | orchestrator |
2026-02-18 05:20:24.615287 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-18 05:20:24.615298 | orchestrator | Wednesday 18 February 2026 05:20:12 +0000 (0:00:01.802) 0:00:18.835 ****
2026-02-18 05:20:24.615310 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-18 05:20:24.615321 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-18 05:20:24.615331 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-18 05:20:24.615342 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-18 05:20:24.615359 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-18 05:20:24.615378 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-18 05:20:24.615397 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-18 05:20:24.615419 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-18 05:20:24.615438 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-18 05:20:24.615455 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-18 05:20:24.615471 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-18 05:20:24.615482 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-18 05:20:24.615492 | orchestrator |
2026-02-18 05:20:24.615503 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-18 05:20:24.615514 | orchestrator | Wednesday 18 February 2026 05:20:15 +0000 (0:00:03.470) 0:00:22.305 ****
2026-02-18 05:20:24.615525 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-02-18 05:20:24.615536 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-02-18 05:20:24.615546 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-02-18 05:20:24.615557 | orchestrator |
2026-02-18 05:20:24.615568 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-18 05:20:24.615598 | orchestrator | Wednesday 18 February 2026 05:20:17 +0000 (0:00:01.898) 0:00:24.204 ****
2026-02-18 05:20:24.615609 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-02-18 05:20:24.615620 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-02-18 05:20:24.615631 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-02-18 05:20:24.615641 | orchestrator |
2026-02-18 05:20:24.615652 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-18 05:20:24.615663 | orchestrator | Wednesday 18 February 2026 05:20:19 +0000 (0:00:02.212) 0:00:26.417 ****
2026-02-18 05:20:24.615674 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-18 05:20:24.615685 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:20:24.615696 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-18 05:20:24.615707 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:20:24.615717 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-18 05:20:24.615728 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:20:24.615739 | orchestrator |
2026-02-18 05:20:24.615750 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-18 05:20:24.615761 | orchestrator | Wednesday 18 February 2026 05:20:21 +0000 (0:00:01.921) 0:00:28.338 **** 2026-02-18 05:20:24.615782 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-18 05:20:24.615856 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-18 05:20:24.615872 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-18 05:20:24.615883 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:20:24.615894 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:20:24.615915 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:20:35.819423 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:20:35.819531 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:20:35.819542 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:20:35.819549 | orchestrator | 2026-02-18 05:20:35.819556 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-18 05:20:35.819563 | orchestrator | Wednesday 18 February 2026 05:20:24 +0000 (0:00:02.695) 0:00:31.034 **** 2026-02-18 05:20:35.819569 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:20:35.819576 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:20:35.819582 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:20:35.819588 | orchestrator | 2026-02-18 05:20:35.819594 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-18 05:20:35.819600 | orchestrator | Wednesday 18 February 2026 05:20:26 +0000 (0:00:02.027) 0:00:33.061 **** 2026-02-18 05:20:35.819606 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-18 05:20:35.819613 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-18 05:20:35.819619 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-18 05:20:35.819625 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-18 05:20:35.819631 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-18 05:20:35.819637 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-18 05:20:35.819642 | orchestrator | 2026-02-18 05:20:35.819648 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-18 05:20:35.819654 | orchestrator | Wednesday 18 February 2026 05:20:29 +0000 (0:00:02.847) 0:00:35.908 **** 2026-02-18 05:20:35.819660 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:20:35.819666 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:20:35.819671 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:20:35.819677 | orchestrator | 2026-02-18 05:20:35.819683 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-18 05:20:35.819689 | orchestrator | Wednesday 18 February 2026 05:20:31 +0000 (0:00:02.309) 0:00:38.218 **** 2026-02-18 05:20:35.819694 | orchestrator | ok: 
[testbed-node-0] 2026-02-18 05:20:35.819700 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:20:35.819706 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:20:35.819712 | orchestrator | 2026-02-18 05:20:35.819717 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-18 05:20:35.819723 | orchestrator | Wednesday 18 February 2026 05:20:34 +0000 (0:00:02.276) 0:00:40.494 **** 2026-02-18 05:20:35.819730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 05:20:35.819754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:20:35.819765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:20:35.819814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 05:20:35.819823 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:20:35.819829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 05:20:35.819835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:20:35.819841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:20:35.819852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 05:20:35.819859 | orchestrator | skipping: [testbed-node-1] 2026-02-18 
05:20:35.819870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 05:20:39.872429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:20:39.872549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:20:39.872562 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 05:20:39.872573 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:20:39.872583 | orchestrator | 2026-02-18 05:20:39.872592 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-18 05:20:39.872603 | orchestrator | Wednesday 18 February 2026 05:20:35 +0000 (0:00:01.740) 0:00:42.235 **** 2026-02-18 05:20:39.872634 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-18 05:20:39.872661 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-18 05:20:39.872674 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-18 05:20:39.872698 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:20:39.872707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:20:39.872717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 05:20:39.872731 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:20:39.872740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:20:39.872753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 05:20:39.872817 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:20:54.098364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:20:54.098487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888', '__omit_place_holder__88d40ba57f38cfab172e853ec9662bdfe0d68888'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-18 05:20:54.098506 | orchestrator | 2026-02-18 05:20:54.098521 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-18 05:20:54.098559 | orchestrator | Wednesday 18 February 2026 05:20:39 +0000 (0:00:04.059) 0:00:46.294 **** 2026-02-18 05:20:54.098572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-18 05:20:54.098585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-18 05:20:54.098597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-18 05:20:54.098622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:20:54.098652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:20:54.098665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:20:54.098685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:20:54.098697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:20:54.098740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:20:54.098753 | orchestrator | 2026-02-18 05:20:54.098765 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-18 05:20:54.098776 | orchestrator | Wednesday 18 February 2026 05:20:44 +0000 (0:00:04.834) 0:00:51.129 **** 2026-02-18 05:20:54.098788 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-18 05:20:54.098800 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-18 05:20:54.098812 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-18 
05:20:54.098823 | orchestrator | 2026-02-18 05:20:54.098839 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-18 05:20:54.098851 | orchestrator | Wednesday 18 February 2026 05:20:47 +0000 (0:00:02.696) 0:00:53.825 **** 2026-02-18 05:20:54.098862 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-18 05:20:54.098875 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-18 05:20:54.098888 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-18 05:20:54.098901 | orchestrator | 2026-02-18 05:20:54.098914 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-18 05:20:54.098927 | orchestrator | Wednesday 18 February 2026 05:20:51 +0000 (0:00:04.599) 0:00:58.425 **** 2026-02-18 05:20:54.098941 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:20:54.098956 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:20:54.098977 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:21:15.039195 | orchestrator | 2026-02-18 05:21:15.039285 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-18 05:21:15.039295 | orchestrator | Wednesday 18 February 2026 05:20:54 +0000 (0:00:02.091) 0:01:00.516 **** 2026-02-18 05:21:15.039302 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-18 05:21:15.039308 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-18 05:21:15.039331 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-18 05:21:15.039337 | 
orchestrator | 2026-02-18 05:21:15.039344 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-18 05:21:15.039349 | orchestrator | Wednesday 18 February 2026 05:20:57 +0000 (0:00:03.037) 0:01:03.554 **** 2026-02-18 05:21:15.039355 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-18 05:21:15.039363 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-18 05:21:15.039369 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-18 05:21:15.039374 | orchestrator | 2026-02-18 05:21:15.039380 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-18 05:21:15.039386 | orchestrator | Wednesday 18 February 2026 05:20:59 +0000 (0:00:02.846) 0:01:06.400 **** 2026-02-18 05:21:15.039392 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:21:15.039398 | orchestrator | 2026-02-18 05:21:15.039404 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-18 05:21:15.039410 | orchestrator | Wednesday 18 February 2026 05:21:01 +0000 (0:00:02.011) 0:01:08.411 **** 2026-02-18 05:21:15.039416 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-02-18 05:21:15.039423 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-02-18 05:21:15.039428 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-02-18 05:21:15.039434 | orchestrator | 2026-02-18 05:21:15.039440 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-18 05:21:15.039446 | orchestrator | Wednesday 18 February 2026 05:21:04 +0000 (0:00:02.708) 0:01:11.120 **** 2026-02-18 05:21:15.039452 | 
orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-18 05:21:15.039458 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-18 05:21:15.039463 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-18 05:21:15.039469 | orchestrator | 2026-02-18 05:21:15.039475 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-18 05:21:15.039480 | orchestrator | Wednesday 18 February 2026 05:21:07 +0000 (0:00:02.631) 0:01:13.752 **** 2026-02-18 05:21:15.039486 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:21:15.039493 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:21:15.039499 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:21:15.039505 | orchestrator | 2026-02-18 05:21:15.039511 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-18 05:21:15.039517 | orchestrator | Wednesday 18 February 2026 05:21:08 +0000 (0:00:01.485) 0:01:15.238 **** 2026-02-18 05:21:15.039523 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:21:15.039528 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:21:15.039534 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:21:15.039540 | orchestrator | 2026-02-18 05:21:15.039546 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-18 05:21:15.039552 | orchestrator | Wednesday 18 February 2026 05:21:10 +0000 (0:00:01.982) 0:01:17.220 **** 2026-02-18 05:21:15.039560 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-18 05:21:15.039586 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-18 05:21:15.039604 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-18 05:21:15.039611 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:21:15.039617 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:21:15.039624 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:21:15.039631 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:21:15.039690 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:21:15.039709 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:21:18.859169 | orchestrator | 2026-02-18 05:21:18.859275 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-18 05:21:18.859291 | orchestrator | Wednesday 18 February 2026 05:21:15 +0000 (0:00:04.231) 0:01:21.452 **** 2026-02-18 05:21:18.859306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 05:21:18.859320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:21:18.859331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:21:18.859343 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:21:18.859354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 05:21:18.859371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:21:18.859435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:21:18.859456 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:21:18.859496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 05:21:18.859515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:21:18.859532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:21:18.859548 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:21:18.859565 | orchestrator | 2026-02-18 05:21:18.859582 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-02-18 05:21:18.859598 | orchestrator | Wednesday 18 February 2026 05:21:16 +0000 (0:00:01.646) 0:01:23.098 **** 2026-02-18 05:21:18.859614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 05:21:18.859666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:21:18.859692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:21:18.859706 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:21:18.859734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 05:21:30.579304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:21:30.579391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:21:30.579400 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:21:30.579407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 05:21:30.579413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:21:30.579445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:21:30.579451 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:21:30.579456 | orchestrator | 2026-02-18 05:21:30.579462 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-18 05:21:30.579468 | orchestrator | Wednesday 18 February 2026 05:21:18 +0000 (0:00:02.180) 0:01:25.279 **** 2026-02-18 05:21:30.579473 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-18 05:21:30.579479 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-18 05:21:30.579484 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-18 05:21:30.579488 | orchestrator | 2026-02-18 05:21:30.579493 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-18 05:21:30.579497 | orchestrator | Wednesday 18 February 2026 05:21:21 +0000 (0:00:02.522) 0:01:27.802 **** 2026-02-18 05:21:30.579502 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-18 05:21:30.579506 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-18 05:21:30.579511 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-18 05:21:30.579515 | orchestrator | 2026-02-18 05:21:30.579530 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-18 05:21:30.579535 | orchestrator | Wednesday 18 February 2026 05:21:23 +0000 (0:00:02.464) 0:01:30.266 **** 2026-02-18 05:21:30.579540 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-18 05:21:30.579544 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-18 05:21:30.579549 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-18 05:21:30.579553 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:21:30.579558 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-18 05:21:30.579562 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-18 05:21:30.579567 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:21:30.579572 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-18 05:21:30.579576 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:21:30.579581 | orchestrator | 2026-02-18 05:21:30.579585 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-18 05:21:30.579590 | orchestrator | Wednesday 18 February 2026 05:21:26 +0000 (0:00:02.606) 0:01:32.873 **** 2026-02-18 05:21:30.579655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-18 05:21:30.579661 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-18 05:21:30.579666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-18 05:21:30.579671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-02-18 05:21:30.579682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:21:34.525341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:21:34.525502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:21:34.525532 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:21:34.525568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:21:34.525665 | orchestrator | 2026-02-18 05:21:34.525688 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-18 05:21:34.525706 | orchestrator | Wednesday 18 February 2026 05:21:30 +0000 (0:00:04.127) 0:01:37.001 **** 2026-02-18 05:21:34.525735 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 05:21:34.525758 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:21:34.525772 | orchestrator | } 2026-02-18 05:21:34.525787 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:21:34.525802 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:21:34.525817 | orchestrator | } 2026-02-18 05:21:34.525830 | orchestrator | changed: [testbed-node-2] => { 2026-02-18 05:21:34.525844 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:21:34.525860 | orchestrator | } 2026-02-18 
05:21:34.525874 | orchestrator | 2026-02-18 05:21:34.525900 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-18 05:21:34.525917 | orchestrator | Wednesday 18 February 2026 05:21:32 +0000 (0:00:01.535) 0:01:38.537 **** 2026-02-18 05:21:34.525936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 05:21:34.525978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:21:34.526011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:21:34.526110 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:21:34.526130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 05:21:34.526147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:21:34.526163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:21:34.526178 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:21:34.526201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 05:21:34.526218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:21:34.526249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:21:40.121018 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:21:40.121158 | orchestrator | 2026-02-18 05:21:40.121175 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-18 05:21:40.121189 | orchestrator | Wednesday 18 February 2026 05:21:34 +0000 (0:00:02.405) 0:01:40.942 **** 2026-02-18 05:21:40.121200 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:21:40.121225 | orchestrator | 2026-02-18 05:21:40.121946 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-18 05:21:40.122091 | orchestrator | Wednesday 18 February 2026 05:21:36 +0000 (0:00:02.023) 0:01:42.965 **** 2026-02-18 05:21:40.122118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:21:40.122138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 05:21:40.122170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 05:21:40.122184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 05:21:40.122251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:21:40.122266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 05:21:40.122278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 05:21:40.122289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 05:21:40.122306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:21:40.122319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 05:21:40.122345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 05:21:41.856256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 05:21:41.856356 | orchestrator | 2026-02-18 05:21:41.856372 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-18 05:21:41.856385 | orchestrator | Wednesday 18 February 2026 05:21:41 +0000 (0:00:04.660) 0:01:47.626 **** 2026-02-18 05:21:41.856399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:21:41.856432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 
'timeout': '30'}}})  2026-02-18 05:21:41.856447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 05:21:41.856482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 05:21:41.856495 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:21:41.856526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:21:41.856539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 05:21:41.856551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 05:21:41.856641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 05:21:41.856655 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:21:41.856667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:21:41.856687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-18 05:21:41.856707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-18 05:21:57.003606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-18 05:21:57.003726 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:21:57.003741 | orchestrator | 2026-02-18 05:21:57.003751 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-18 05:21:57.003761 | orchestrator | Wednesday 18 February 2026 05:21:43 +0000 (0:00:01.953) 0:01:49.580 **** 2026-02-18 05:21:57.003771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}})  2026-02-18 05:21:57.003782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:21:57.003792 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:21:57.003800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:21:57.003809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:21:57.003848 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:21:57.003857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:21:57.003866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:21:57.003873 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:21:57.003881 | orchestrator | 2026-02-18 05:21:57.003890 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-18 05:21:57.003898 | orchestrator | Wednesday 18 February 2026 05:21:45 +0000 (0:00:02.265) 0:01:51.846 **** 2026-02-18 05:21:57.003906 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:21:57.003914 | 
orchestrator | ok: [testbed-node-1] 2026-02-18 05:21:57.003922 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:21:57.003930 | orchestrator | 2026-02-18 05:21:57.003939 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-18 05:21:57.003947 | orchestrator | Wednesday 18 February 2026 05:21:47 +0000 (0:00:02.248) 0:01:54.095 **** 2026-02-18 05:21:57.003955 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:21:57.003963 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:21:57.003970 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:21:57.003978 | orchestrator | 2026-02-18 05:21:57.003986 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-18 05:21:57.003994 | orchestrator | Wednesday 18 February 2026 05:21:50 +0000 (0:00:02.914) 0:01:57.009 **** 2026-02-18 05:21:57.004002 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:21:57.004010 | orchestrator | 2026-02-18 05:21:57.004018 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-18 05:21:57.004026 | orchestrator | Wednesday 18 February 2026 05:21:52 +0000 (0:00:01.774) 0:01:58.784 **** 2026-02-18 05:21:57.004052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:21:57.004064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 05:21:57.004080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:21:57.004094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:21:57.004104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 05:21:57.004114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:21:57.004132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:21:58.731765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 05:21:58.731885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:21:58.731902 | orchestrator | 2026-02-18 05:21:58.731916 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-18 05:21:58.731929 | orchestrator | Wednesday 18 February 2026 05:21:56 +0000 (0:00:04.637) 0:02:03.422 **** 2026-02-18 05:21:58.731947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:21:58.731969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 05:21:58.731988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:21:58.732068 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:21:58.732107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:21:58.732127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-18 05:21:58.732139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:21:58.732150 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:21:58.732162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:21:58.732174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2026-02-18 05:21:58.732202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:22:15.196672 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:22:15.196790 | orchestrator | 2026-02-18 05:22:15.196807 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-18 05:22:15.196821 | orchestrator | Wednesday 18 February 2026 05:21:58 +0000 (0:00:01.729) 0:02:05.152 **** 2026-02-18 05:22:15.196834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:22:15.196849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:22:15.196862 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:22:15.196874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 
05:22:15.196886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:22:15.196897 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:22:15.196908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:22:15.196919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:22:15.196930 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:22:15.196941 | orchestrator | 2026-02-18 05:22:15.196952 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-18 05:22:15.196963 | orchestrator | Wednesday 18 February 2026 05:22:00 +0000 (0:00:01.910) 0:02:07.062 **** 2026-02-18 05:22:15.196974 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:22:15.196987 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:22:15.196998 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:22:15.197008 | orchestrator | 2026-02-18 05:22:15.197019 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-18 05:22:15.197030 | orchestrator | Wednesday 18 February 2026 05:22:02 +0000 (0:00:02.272) 0:02:09.334 **** 2026-02-18 05:22:15.197041 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:22:15.197075 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:22:15.197087 | orchestrator | ok: 
[testbed-node-2] 2026-02-18 05:22:15.197098 | orchestrator | 2026-02-18 05:22:15.197109 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-18 05:22:15.197119 | orchestrator | Wednesday 18 February 2026 05:22:05 +0000 (0:00:02.822) 0:02:12.156 **** 2026-02-18 05:22:15.197130 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:22:15.197141 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:22:15.197152 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:22:15.197162 | orchestrator | 2026-02-18 05:22:15.197173 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-18 05:22:15.197184 | orchestrator | Wednesday 18 February 2026 05:22:07 +0000 (0:00:01.360) 0:02:13.516 **** 2026-02-18 05:22:15.197195 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:22:15.197206 | orchestrator | 2026-02-18 05:22:15.197219 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-18 05:22:15.197231 | orchestrator | Wednesday 18 February 2026 05:22:08 +0000 (0:00:01.728) 0:02:15.245 **** 2026-02-18 05:22:15.197246 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-18 05:22:15.197301 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-18 05:22:15.197316 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-18 05:22:15.197329 | orchestrator | 2026-02-18 05:22:15.197342 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-18 
05:22:15.197354 | orchestrator | Wednesday 18 February 2026 05:22:12 +0000 (0:00:03.738) 0:02:18.984 **** 2026-02-18 05:22:15.197367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-18 05:22:15.197388 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:22:15.197401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-18 05:22:15.197413 | orchestrator | skipping: [testbed-node-1] 2026-02-18 
05:22:15.197434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-18 05:22:27.260088 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:22:27.260208 | orchestrator | 2026-02-18 05:22:27.260225 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-18 05:22:27.260238 | orchestrator | Wednesday 18 February 2026 05:22:15 +0000 (0:00:02.631) 0:02:21.615 **** 2026-02-18 05:22:27.260268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 05:22:27.260284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 05:22:27.260297 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:22:27.260309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 05:22:27.260340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 05:22:27.260352 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:22:27.260364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 05:22:27.260376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-18 05:22:27.260387 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:22:27.260398 | orchestrator | 2026-02-18 05:22:27.260409 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-18 05:22:27.260480 | orchestrator | Wednesday 18 February 2026 05:22:17 +0000 (0:00:02.797) 0:02:24.413 **** 2026-02-18 05:22:27.260492 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:22:27.260504 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:22:27.260515 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:22:27.260525 | orchestrator | 2026-02-18 05:22:27.260537 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-18 05:22:27.260554 | orchestrator | Wednesday 18 February 2026 05:22:19 +0000 (0:00:01.459) 0:02:25.873 **** 2026-02-18 05:22:27.260572 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:22:27.260590 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:22:27.260610 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:22:27.260629 | orchestrator | 2026-02-18 05:22:27.260649 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-18 05:22:27.260663 | orchestrator | Wednesday 18 February 2026 05:22:21 +0000 (0:00:02.418) 0:02:28.291 **** 2026-02-18 05:22:27.260675 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:22:27.260687 | orchestrator | 2026-02-18 05:22:27.260700 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-18 05:22:27.260712 | orchestrator | Wednesday 18 February 2026 05:22:23 +0000 (0:00:01.742) 0:02:30.034 **** 2026-02-18 05:22:27.260757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:22:27.260785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:22:27.260799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 05:22:27.260813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 05:22:27.260828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:22:27.260855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:22:29.270642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 05:22:29.270749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 05:22:29.270767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:22:29.270780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:22:29.270792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 05:22:29.270863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 05:22:29.270876 | orchestrator | 2026-02-18 05:22:29.270888 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-18 05:22:29.270899 | orchestrator | Wednesday 18 February 2026 05:22:28 +0000 
(0:00:04.759) 0:02:34.794 **** 2026-02-18 05:22:29.270910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:22:29.270922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:22:29.270932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 05:22:29.270942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 05:22:29.270960 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:22:29.270987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:22:40.889459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:22:40.889561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 05:22:40.889571 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 05:22:40.889578 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:22:40.889601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:22:40.889632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:22:40.889653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-18 05:22:40.889660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-02-18 05:22:40.889667 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:22:40.889674 | orchestrator | 2026-02-18 05:22:40.889682 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-18 05:22:40.889691 | orchestrator | Wednesday 18 February 2026 05:22:30 +0000 (0:00:02.074) 0:02:36.869 **** 2026-02-18 05:22:40.889699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:22:40.889707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:22:40.889716 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:22:40.889723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:22:40.889730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:22:40.889744 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:22:40.889751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-18 05:22:40.889758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:22:40.889765 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:22:40.889772 | orchestrator | 2026-02-18 05:22:40.889782 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-18 05:22:40.889789 | orchestrator | Wednesday 18 February 2026 05:22:32 +0000 (0:00:02.215) 0:02:39.084 **** 2026-02-18 05:22:40.889796 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:22:40.889803 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:22:40.889810 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:22:40.889817 | orchestrator | 2026-02-18 05:22:40.889823 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-18 05:22:40.889830 | orchestrator | Wednesday 18 February 2026 05:22:34 +0000 (0:00:02.321) 0:02:41.405 **** 2026-02-18 05:22:40.889837 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:22:40.889844 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:22:40.889851 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:22:40.889858 | orchestrator | 2026-02-18 05:22:40.889864 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-18 05:22:40.889871 | orchestrator | Wednesday 18 February 2026 05:22:37 +0000 (0:00:02.878) 0:02:44.283 **** 2026-02-18 05:22:40.889878 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:22:40.889885 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:22:40.889892 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:22:40.889898 | orchestrator | 2026-02-18 05:22:40.889905 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-02-18 05:22:40.889912 | orchestrator | Wednesday 18 February 2026 05:22:39 +0000 (0:00:01.623) 0:02:45.907 **** 2026-02-18 05:22:40.889919 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:22:40.889926 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:22:40.889936 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:22:46.495740 | orchestrator | 2026-02-18 05:22:46.495841 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-18 05:22:46.495857 | orchestrator | Wednesday 18 February 2026 05:22:40 +0000 (0:00:01.406) 0:02:47.314 **** 2026-02-18 05:22:46.495868 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:22:46.495878 | orchestrator | 2026-02-18 05:22:46.495889 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-18 05:22:46.495899 | orchestrator | Wednesday 18 February 2026 05:22:42 +0000 (0:00:01.932) 0:02:49.246 **** 2026-02-18 05:22:46.495914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:22:46.495951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 05:22:46.495964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 05:22:46.495988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 05:22:46.495999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 05:22:46.496026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:22:46.496037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-18 05:22:46.496055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:22:46.496066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 05:22:46.496082 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 05:22:46.496092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 05:22:46.496110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:22:48.422619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 05:22:48.422720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 05:22:48.422736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:22:48.422765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-18 05:22:48.422777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 05:22:48.422789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 05:22:48.422839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 05:22:48.422854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:22:48.422874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-18 05:22:48.422894 | orchestrator | 2026-02-18 05:22:48.422915 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-18 05:22:48.422936 | orchestrator | Wednesday 18 February 2026 05:22:47 +0000 (0:00:04.930) 0:02:54.176 **** 2026-02-18 05:22:48.422958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:22:48.422995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 05:22:48.423016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 05:22:49.726451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 05:22:49.726566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 05:22:49.726583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:22:49.726597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-18 05:22:49.726609 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:22:49.726625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:22:49.727419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 05:22:49.727450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 05:22:49.727462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 05:22:49.727473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 05:22:49.727485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:22:49.727496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-18 05:22:49.727518 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:22:49.727547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:23:05.031759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-18 05:23:05.031881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-18 05:23:05.031899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-18 05:23:05.031910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-18 05:23:05.031940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:23:05.031972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-18 05:23:05.031985 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:05.031999 | orchestrator | 2026-02-18 05:23:05.032011 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-18 05:23:05.032024 | orchestrator | Wednesday 18 February 2026 05:22:49 +0000 (0:00:01.978) 0:02:56.154 **** 2026-02-18 05:23:05.032052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:05.032067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:05.032081 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:23:05.032092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:05.032104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:05.032115 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:23:05.032126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:05.032137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:05.032147 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:05.032158 | orchestrator | 2026-02-18 05:23:05.032169 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-18 05:23:05.032180 | orchestrator | Wednesday 18 February 2026 05:22:51 +0000 (0:00:02.077) 0:02:58.232 **** 2026-02-18 05:23:05.032192 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:23:05.032204 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:23:05.032215 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:23:05.032226 | orchestrator | 2026-02-18 05:23:05.032237 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-18 05:23:05.032248 | orchestrator | Wednesday 18 February 2026 05:22:54 +0000 (0:00:02.299) 0:03:00.532 **** 2026-02-18 05:23:05.032258 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:23:05.032277 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:23:05.032291 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:23:05.032303 | orchestrator | 2026-02-18 05:23:05.032345 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-18 05:23:05.032360 | orchestrator | Wednesday 18 February 2026 05:22:56 +0000 (0:00:02.883) 0:03:03.415 **** 2026-02-18 05:23:05.032373 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:23:05.032386 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:23:05.032400 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:05.032413 | orchestrator | 2026-02-18 05:23:05.032426 | orchestrator | TASK 
[include_role : glance] *************************************************** 2026-02-18 05:23:05.032439 | orchestrator | Wednesday 18 February 2026 05:22:58 +0000 (0:00:01.401) 0:03:04.817 **** 2026-02-18 05:23:05.032452 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:23:05.032465 | orchestrator | 2026-02-18 05:23:05.032478 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-18 05:23:05.032491 | orchestrator | Wednesday 18 February 2026 05:23:00 +0000 (0:00:01.969) 0:03:06.786 **** 2026-02-18 05:23:05.032525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 05:23:06.186966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 05:23:06.187106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 05:23:06.187156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 05:23:06.187181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-18 
05:23:06.187200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 
05:23:09.762775 | orchestrator | 2026-02-18 05:23:09.762879 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-18 05:23:09.762894 | orchestrator | Wednesday 18 February 2026 05:23:06 +0000 (0:00:05.829) 0:03:12.616 **** 2026-02-18 05:23:09.762929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 05:23:09.762947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 05:23:09.763012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': 
{'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 05:23:09.763027 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:23:09.763041 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 05:23:09.763053 | orchestrator | 
skipping: [testbed-node-0] 2026-02-18 05:23:09.763082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-18 05:23:28.551464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-18 05:23:28.551587 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:28.551605 | orchestrator | 2026-02-18 05:23:28.551619 | orchestrator | TASK [haproxy-config 
: Configuring firewall for glance] ************************ 2026-02-18 05:23:28.551632 | orchestrator | Wednesday 18 February 2026 05:23:10 +0000 (0:00:04.678) 0:03:17.295 **** 2026-02-18 05:23:28.551669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 05:23:28.551684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 05:23:28.551696 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:23:28.551707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 05:23:28.551772 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 05:23:28.551786 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:28.551814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 05:23:28.551826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-18 05:23:28.551837 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:23:28.551849 | orchestrator | 2026-02-18 05:23:28.551860 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 
2026-02-18 05:23:28.551871 | orchestrator | Wednesday 18 February 2026 05:23:15 +0000 (0:00:04.749) 0:03:22.044 **** 2026-02-18 05:23:28.551882 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:23:28.551894 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:23:28.551905 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:23:28.551918 | orchestrator | 2026-02-18 05:23:28.551939 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-18 05:23:28.551952 | orchestrator | Wednesday 18 February 2026 05:23:17 +0000 (0:00:02.296) 0:03:24.341 **** 2026-02-18 05:23:28.551964 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:23:28.551976 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:23:28.551988 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:23:28.552000 | orchestrator | 2026-02-18 05:23:28.552013 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-18 05:23:28.552025 | orchestrator | Wednesday 18 February 2026 05:23:20 +0000 (0:00:02.894) 0:03:27.235 **** 2026-02-18 05:23:28.552038 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:23:28.552050 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:23:28.552062 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:28.552074 | orchestrator | 2026-02-18 05:23:28.552086 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-18 05:23:28.552099 | orchestrator | Wednesday 18 February 2026 05:23:22 +0000 (0:00:01.617) 0:03:28.853 **** 2026-02-18 05:23:28.552111 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:23:28.552123 | orchestrator | 2026-02-18 05:23:28.552136 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-18 05:23:28.552148 | orchestrator | Wednesday 18 February 2026 05:23:24 +0000 (0:00:01.638) 0:03:30.492 **** 2026-02-18 
05:23:28.552162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:23:28.552185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:23:45.527137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:23:45.527334 | orchestrator | 2026-02-18 05:23:45.527364 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-18 05:23:45.527385 | orchestrator | Wednesday 18 February 2026 05:23:28 +0000 (0:00:04.485) 0:03:34.977 **** 2026-02-18 05:23:45.527439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:23:45.527461 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:23:45.527482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:23:45.527503 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:23:45.527522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:23:45.527541 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:45.527559 | orchestrator | 2026-02-18 05:23:45.527579 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-18 05:23:45.527601 | orchestrator | Wednesday 18 February 2026 05:23:30 +0000 (0:00:01.845) 0:03:36.823 **** 2026-02-18 05:23:45.527622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:45.527647 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:45.527669 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:23:45.527729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:45.527750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:45.527767 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:23:45.527797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:45.527816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:23:45.527834 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:45.527854 | orchestrator | 2026-02-18 05:23:45.527872 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-18 05:23:45.527891 | orchestrator | Wednesday 18 February 2026 05:23:31 +0000 (0:00:01.510) 0:03:38.334 **** 2026-02-18 05:23:45.527910 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:23:45.527929 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:23:45.527949 | 
orchestrator | ok: [testbed-node-2] 2026-02-18 05:23:45.527968 | orchestrator | 2026-02-18 05:23:45.527986 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-18 05:23:45.528003 | orchestrator | Wednesday 18 February 2026 05:23:34 +0000 (0:00:02.356) 0:03:40.691 **** 2026-02-18 05:23:45.528022 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:23:45.528040 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:23:45.528059 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:23:45.528077 | orchestrator | 2026-02-18 05:23:45.528096 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-18 05:23:45.528115 | orchestrator | Wednesday 18 February 2026 05:23:37 +0000 (0:00:02.955) 0:03:43.646 **** 2026-02-18 05:23:45.528133 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:23:45.528153 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:23:45.528164 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:45.528175 | orchestrator | 2026-02-18 05:23:45.528186 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-18 05:23:45.528196 | orchestrator | Wednesday 18 February 2026 05:23:38 +0000 (0:00:01.423) 0:03:45.070 **** 2026-02-18 05:23:45.528207 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:23:45.528269 | orchestrator | 2026-02-18 05:23:45.528281 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-18 05:23:45.528292 | orchestrator | Wednesday 18 February 2026 05:23:40 +0000 (0:00:02.122) 0:03:47.192 **** 2026-02-18 05:23:45.528333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 
05:23:47.240872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 05:23:47.241011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-18 05:23:47.241051 | orchestrator | 2026-02-18 05:23:47.241065 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-18 05:23:47.241077 | orchestrator | Wednesday 18 February 2026 05:23:45 +0000 (0:00:04.758) 0:03:51.950 **** 2026-02-18 05:23:47.241091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 05:23:47.241104 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:23:47.241201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 05:23:56.060694 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:23:56.060823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-18 05:23:56.060868 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:56.060880 | orchestrator | 2026-02-18 05:23:56.060892 | 
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-18 05:23:56.060943 | orchestrator | Wednesday 18 February 2026 05:23:47 +0000 (0:00:01.717) 0:03:53.668 **** 2026-02-18 05:23:56.060957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-18 05:23:56.060983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 05:23:56.060996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-18 05:23:56.061008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 05:23:56.061018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-18 05:23:56.061030 | orchestrator | skipping: [testbed-node-0] 2026-02-18 
05:23:56.061057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-18 05:23:56.061068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 05:23:56.061078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-18 05:23:56.061089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 05:23:56.061099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-18 05:23:56.061109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-18 05:23:56.061128 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:23:56.061138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 05:23:56.061148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-18 05:23:56.061163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-18 05:23:56.061174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-18 05:23:56.061183 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:56.061221 | orchestrator | 2026-02-18 05:23:56.061234 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-18 05:23:56.061247 | orchestrator | Wednesday 18 February 2026 05:23:49 +0000 (0:00:02.028) 0:03:55.696 **** 2026-02-18 05:23:56.061259 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:23:56.061271 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:23:56.061283 | 
orchestrator | ok: [testbed-node-2] 2026-02-18 05:23:56.061295 | orchestrator | 2026-02-18 05:23:56.061307 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-18 05:23:56.061319 | orchestrator | Wednesday 18 February 2026 05:23:51 +0000 (0:00:02.243) 0:03:57.940 **** 2026-02-18 05:23:56.061332 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:23:56.061343 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:23:56.061355 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:23:56.061366 | orchestrator | 2026-02-18 05:23:56.061379 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-18 05:23:56.061391 | orchestrator | Wednesday 18 February 2026 05:23:54 +0000 (0:00:02.912) 0:04:00.853 **** 2026-02-18 05:23:56.061402 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:23:56.061414 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:23:56.061426 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:23:56.061438 | orchestrator | 2026-02-18 05:23:56.061450 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-18 05:23:56.061462 | orchestrator | Wednesday 18 February 2026 05:23:55 +0000 (0:00:01.408) 0:04:02.261 **** 2026-02-18 05:23:56.061500 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:24:05.937750 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:24:05.937864 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:24:05.937880 | orchestrator | 2026-02-18 05:24:05.937894 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-18 05:24:05.937907 | orchestrator | Wednesday 18 February 2026 05:23:57 +0000 (0:00:01.368) 0:04:03.629 **** 2026-02-18 05:24:05.937919 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:24:05.937930 | orchestrator | 2026-02-18 05:24:05.937941 | 
orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-18 05:24:05.937952 | orchestrator | Wednesday 18 February 2026 05:23:58 +0000 (0:00:01.772) 0:04:05.401 **** 2026-02-18 05:24:05.937968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-18 05:24:05.938014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}})  2026-02-18 05:24:05.938089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 05:24:05.938116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-18 05:24:05.938150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-18 05:24:05.938211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 05:24:05.938228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 05:24:05.938240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 05:24:05.938257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 05:24:05.938271 | orchestrator | 2026-02-18 05:24:05.938284 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-18 05:24:05.938297 | orchestrator | Wednesday 18 February 2026 05:24:03 +0000 (0:00:04.871) 0:04:10.273 **** 2026-02-18 05:24:05.938321 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-18 05:24:07.562217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 05:24:07.562324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 05:24:07.562341 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:24:07.562375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-18 05:24:07.562390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 05:24:07.562402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 05:24:07.562435 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:24:07.562468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-18 05:24:07.562481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-18 05:24:07.562492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-18 05:24:07.562504 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:24:07.562515 | orchestrator | 2026-02-18 05:24:07.562527 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-18 05:24:07.562540 | orchestrator | Wednesday 18 February 2026 05:24:05 +0000 (0:00:02.087) 0:04:12.361 **** 2026-02-18 05:24:07.562558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-18 05:24:07.562573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-18 05:24:07.562585 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:24:07.562597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-18 05:24:07.562609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-18 05:24:07.562628 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:24:07.562640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-18 05:24:07.562652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-18 05:24:07.562665 | orchestrator | skipping: [testbed-node-2] 2026-02-18 
05:24:07.562678 | orchestrator | 2026-02-18 05:24:07.562691 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-18 05:24:07.562710 | orchestrator | Wednesday 18 February 2026 05:24:07 +0000 (0:00:01.622) 0:04:13.983 **** 2026-02-18 05:24:22.993521 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:24:22.993651 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:24:22.993667 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:24:22.993677 | orchestrator | 2026-02-18 05:24:22.993688 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-18 05:24:22.993698 | orchestrator | Wednesday 18 February 2026 05:24:09 +0000 (0:00:02.258) 0:04:16.242 **** 2026-02-18 05:24:22.993707 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:24:22.993716 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:24:22.993724 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:24:22.993733 | orchestrator | 2026-02-18 05:24:22.993741 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-18 05:24:22.993750 | orchestrator | Wednesday 18 February 2026 05:24:12 +0000 (0:00:03.174) 0:04:19.417 **** 2026-02-18 05:24:22.993758 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:24:22.993768 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:24:22.993777 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:24:22.993785 | orchestrator | 2026-02-18 05:24:22.993794 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-18 05:24:22.993802 | orchestrator | Wednesday 18 February 2026 05:24:14 +0000 (0:00:01.425) 0:04:20.842 **** 2026-02-18 05:24:22.993811 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:24:22.993819 | orchestrator | 2026-02-18 05:24:22.993827 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] 
********************* 2026-02-18 05:24:22.993836 | orchestrator | Wednesday 18 February 2026 05:24:16 +0000 (0:00:01.784) 0:04:22.626 **** 2026-02-18 05:24:22.993849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:24:22.993878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2026-02-18 05:24:22.993908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:24:22.993934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 05:24:22.993944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:24:22.993957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 05:24:22.993973 | orchestrator | 2026-02-18 05:24:22.993981 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-18 05:24:22.993990 | orchestrator | Wednesday 18 
February 2026 05:24:21 +0000 (0:00:05.056) 0:04:27.683 **** 2026-02-18 05:24:22.993999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:24:22.994013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 05:24:35.942496 | orchestrator | skipping: [testbed-node-0] 
2026-02-18 05:24:35.942649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:24:35.942682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 05:24:35.942735 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:24:35.942775 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:24:35.942796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-18 05:24:35.942817 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:24:35.942838 | orchestrator | 2026-02-18 05:24:35.942861 | orchestrator | TASK [haproxy-config : 
Configuring firewall for magnum] ************************ 2026-02-18 05:24:35.942883 | orchestrator | Wednesday 18 February 2026 05:24:22 +0000 (0:00:01.731) 0:04:29.415 **** 2026-02-18 05:24:35.942928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:35.942954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:35.942975 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:24:35.942995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:35.943018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:35.943040 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:24:35.943062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:35.943089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:35.943218 | 
orchestrator | skipping: [testbed-node-2]
2026-02-18 05:24:35.943244 | orchestrator |
2026-02-18 05:24:35.943272 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-18 05:24:35.943297 | orchestrator | Wednesday 18 February 2026 05:24:24 +0000 (0:00:02.020) 0:04:31.435 ****
2026-02-18 05:24:35.943321 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:24:35.943345 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:24:35.943368 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:24:35.943387 | orchestrator |
2026-02-18 05:24:35.943406 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-18 05:24:35.943426 | orchestrator | Wednesday 18 February 2026 05:24:27 +0000 (0:00:02.231) 0:04:33.666 ****
2026-02-18 05:24:35.943444 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:24:35.943463 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:24:35.943480 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:24:35.943498 | orchestrator |
2026-02-18 05:24:35.943526 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-18 05:24:35.943545 | orchestrator | Wednesday 18 February 2026 05:24:30 +0000 (0:00:02.132) 0:04:36.609 ****
2026-02-18 05:24:35.943563 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-18 05:24:35.943581 | orchestrator |
2026-02-18 05:24:35.943600 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-18 05:24:35.943618 | orchestrator | Wednesday 18 February 2026 05:24:32 +0000 (0:00:02.132) 0:04:38.742 ****
2026-02-18 05:24:35.943639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes':
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:24:35.943660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:24:35.943699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 05:24:37.643391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 05:24:37.643494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:24:37.643506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:24:37.643515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:24:37.643522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 05:24:37.643561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:24:37.643570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 05:24:37.643582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 05:24:37.643590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 05:24:37.643598 | orchestrator | 2026-02-18 05:24:37.643607 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-18 05:24:37.643615 | orchestrator | Wednesday 18 February 2026 05:24:37 +0000 (0:00:04.715) 0:04:43.458 **** 2026-02-18 05:24:37.643625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 
'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:24:37.643638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:24:40.831908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 05:24:40.832036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 05:24:40.832063 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:24:40.832172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:24:40.832199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:24:40.832218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 05:24:40.832292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 05:24:40.832313 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:24:40.832330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:24:40.832356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:24:40.832373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-18 05:24:40.832390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-18 05:24:40.832408 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:24:40.832426 | orchestrator | 2026-02-18 05:24:40.832457 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-18 05:24:40.832477 | orchestrator | Wednesday 18 February 2026 05:24:38 +0000 (0:00:01.691) 0:04:45.149 **** 2026-02-18 05:24:40.832497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:40.832519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:40.832536 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:24:40.832550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:40.832572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:56.450616 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:24:56.450736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:56.450756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:24:56.450771 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:24:56.450783 | orchestrator | 2026-02-18 05:24:56.450795 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-18 05:24:56.450808 | orchestrator | Wednesday 18 February 2026 05:24:40 +0000 (0:00:02.103) 0:04:47.252 **** 2026-02-18 05:24:56.450819 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:24:56.450831 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:24:56.450842 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:24:56.450853 | orchestrator | 2026-02-18 05:24:56.450871 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-18 05:24:56.450889 | orchestrator | Wednesday 18 February 2026 05:24:43 +0000 (0:00:02.310) 0:04:49.563 **** 2026-02-18 05:24:56.450909 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:24:56.450927 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:24:56.450946 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:24:56.450966 | orchestrator | 2026-02-18 05:24:56.450986 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-18 05:24:56.451017 | orchestrator | Wednesday 18 February 
2026 05:24:46 +0000 (0:00:03.118) 0:04:52.682 **** 2026-02-18 05:24:56.451031 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:24:56.451042 | orchestrator | 2026-02-18 05:24:56.451088 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-18 05:24:56.451100 | orchestrator | Wednesday 18 February 2026 05:24:48 +0000 (0:00:02.574) 0:04:55.256 **** 2026-02-18 05:24:56.451111 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:24:56.451123 | orchestrator | 2026-02-18 05:24:56.451134 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-18 05:24:56.451145 | orchestrator | Wednesday 18 February 2026 05:24:52 +0000 (0:00:04.015) 0:04:59.272 **** 2026-02-18 05:24:56.451160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:24:56.451218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-18 05:24:56.451232 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:24:56.451270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:24:56.451292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-02-18 05:24:56.451304 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:24:56.451325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:25:00.223548 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-18 05:25:00.223666 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:00.223684 | orchestrator | 2026-02-18 05:25:00.223696 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-18 05:25:00.223727 | orchestrator | Wednesday 18 February 2026 05:24:56 +0000 (0:00:03.600) 0:05:02.872 **** 2026-02-18 05:25:00.223742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:25:00.223779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-18 05:25:00.223792 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:25:00.223829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:25:00.223852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-18 05:25:00.223864 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:25:00.223876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:25:00.223896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-18 05:25:16.641717 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:16.641835 | orchestrator | 2026-02-18 05:25:16.641852 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-18 05:25:16.641866 | orchestrator | Wednesday 18 February 2026 05:25:00 +0000 (0:00:03.775) 0:05:06.648 **** 2026-02-18 05:25:16.641896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 05:25:16.641934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 05:25:16.641947 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:25:16.641959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 05:25:16.641971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 05:25:16.641982 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:25:16.641994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 05:25:16.642129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-18 05:25:16.642144 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:16.642155 | orchestrator | 2026-02-18 05:25:16.642167 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-18 05:25:16.642178 | orchestrator | Wednesday 18 February 2026 05:25:04 +0000 (0:00:04.114) 0:05:10.762 **** 2026-02-18 05:25:16.642190 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:25:16.642219 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:25:16.642231 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:25:16.642242 | orchestrator | 2026-02-18 05:25:16.642255 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-18 05:25:16.642267 | orchestrator | Wednesday 18 February 2026 05:25:07 +0000 (0:00:02.935) 0:05:13.698 **** 2026-02-18 05:25:16.642291 | orchestrator | skipping: [testbed-node-0] 2026-02-18 
05:25:16.642303 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:25:16.642316 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:16.642327 | orchestrator | 2026-02-18 05:25:16.642340 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-18 05:25:16.642352 | orchestrator | Wednesday 18 February 2026 05:25:09 +0000 (0:00:02.679) 0:05:16.377 **** 2026-02-18 05:25:16.642364 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:25:16.642376 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:25:16.642388 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:16.642400 | orchestrator | 2026-02-18 05:25:16.642418 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-18 05:25:16.642431 | orchestrator | Wednesday 18 February 2026 05:25:11 +0000 (0:00:01.386) 0:05:17.764 **** 2026-02-18 05:25:16.642443 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:25:16.642471 | orchestrator | 2026-02-18 05:25:16.642494 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-18 05:25:16.642507 | orchestrator | Wednesday 18 February 2026 05:25:13 +0000 (0:00:02.247) 0:05:20.011 **** 2026-02-18 05:25:16.642521 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-18 05:25:16.642536 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-18 05:25:16.642550 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-18 05:25:16.642563 | orchestrator | 2026-02-18 05:25:16.642574 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-18 05:25:16.642585 | orchestrator | Wednesday 18 February 2026 05:25:16 +0000 
(0:00:02.518) 0:05:22.529 **** 2026-02-18 05:25:16.642604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-18 05:25:31.823478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-18 05:25:31.823593 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:25:31.823612 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:25:31.823625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-18 05:25:31.823637 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:31.823648 | orchestrator | 2026-02-18 05:25:31.823661 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-18 05:25:31.823673 | orchestrator | Wednesday 18 February 2026 05:25:17 +0000 (0:00:01.814) 0:05:24.343 **** 2026-02-18 05:25:31.823685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-18 05:25:31.823698 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:25:31.823709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-18 05:25:31.823720 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:25:31.823731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-18 05:25:31.823742 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:31.823753 | orchestrator | 2026-02-18 05:25:31.823764 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-18 05:25:31.823775 | orchestrator | Wednesday 18 February 2026 05:25:19 +0000 (0:00:01.485) 0:05:25.829 **** 2026-02-18 05:25:31.823786 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:25:31.823818 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:25:31.823830 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:31.823841 | orchestrator | 2026-02-18 05:25:31.823852 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-18 05:25:31.823862 | orchestrator | Wednesday 18 February 2026 05:25:20 +0000 (0:00:01.492) 0:05:27.321 **** 2026-02-18 05:25:31.823873 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:25:31.823884 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:25:31.823894 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:31.823905 | orchestrator | 2026-02-18 05:25:31.823916 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-18 05:25:31.823926 | orchestrator | Wednesday 18 February 2026 05:25:23 +0000 (0:00:02.577) 0:05:29.899 **** 2026-02-18 05:25:31.823937 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:25:31.823948 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:25:31.823958 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:31.823996 | orchestrator | 2026-02-18 05:25:31.824010 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-18 05:25:31.824023 | orchestrator | Wednesday 18 February 2026 05:25:24 +0000 (0:00:01.438) 0:05:31.338 **** 2026-02-18 05:25:31.824035 
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:25:31.824047 | orchestrator | 2026-02-18 05:25:31.824060 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-18 05:25:31.824072 | orchestrator | Wednesday 18 February 2026 05:25:27 +0000 (0:00:02.197) 0:05:33.536 **** 2026-02-18 05:25:31.824114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:25:31.824132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:31.824148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-18 05:25:31.824172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-18 05:25:31.824197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:31.972314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:31.972409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:31.972426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 05:25:31.972439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:31.972472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:31.972484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-18 05:25:31.972521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:25:31.972536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:31.972547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:31.972566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:31.972578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-18 05:25:31.972601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-18 05:25:32.191848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-18 05:25:32.191968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:32.192034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:32.192047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:32.192075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:25:32.192106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:32.192119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 05:25:32.192140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:32.192152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:32.192164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:32.192189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-18 05:25:33.556293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-18 05:25:33.556427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-18 05:25:33.556451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:33.556465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:33.556479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:33.556516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:33.556552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 05:25:33.556577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:33.556590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:33.556602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-18 05:25:33.556620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:33.556639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:34.723632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-18 05:25:34.723738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:34.723755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:34.723770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-18 05:25:34.723801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:34.723814 | orchestrator | 2026-02-18 05:25:34.723828 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external 
frontend] *** 2026-02-18 05:25:34.723840 | orchestrator | Wednesday 18 February 2026 05:25:33 +0000 (0:00:06.446) 0:05:39.982 **** 2026-02-18 05:25:34.723894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:25:34.723909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:34.723921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-18 05:25:34.723940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-18 05:25:34.723960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:34.818638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:34.818738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:34.818755 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 05:25:34.818769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:25:34.818800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': 
True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:34.818854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:34.818867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:34.818880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-18 05:25:34.818892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-18 05:25:34.818904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-18 05:25:34.818923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:34.818942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:34.937454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:34.937576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:34.937602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-18 05:25:34.937686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:34.937724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:34.937737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 05:25:34.937769 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:25:34.937784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:34.937798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:34.937810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-18 05:25:34.937827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:25:34.937847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': 
False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:34.937866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:36.299441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:36.299538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-18 05:25:36.299592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-18 05:25:36.299605 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-18 05:25:36.299632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:36.299643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:36.299654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:36.299665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:36.299692 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:25:36.299710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-18 05:25:36.299722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:36.299740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:52.381062 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-18 05:25:52.381146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-18 05:25:52.381153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-18 05:25:52.381182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-18 05:25:52.381188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-18 05:25:52.381192 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:52.381198 | orchestrator | 2026-02-18 05:25:52.381203 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-18 05:25:52.381209 | orchestrator | Wednesday 18 February 2026 05:25:36 +0000 (0:00:02.743) 0:05:42.726 **** 2026-02-18 05:25:52.381213 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:25:52.381228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:25:52.381234 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:25:52.381238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:25:52.381241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:25:52.381245 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:25:52.381249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:25:52.381261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:25:52.381265 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:25:52.381269 | orchestrator | 2026-02-18 05:25:52.381272 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users 
config] ************ 2026-02-18 05:25:52.381276 | orchestrator | Wednesday 18 February 2026 05:25:39 +0000 (0:00:03.010) 0:05:45.737 **** 2026-02-18 05:25:52.381280 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:25:52.381285 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:25:52.381289 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:25:52.381293 | orchestrator | 2026-02-18 05:25:52.381296 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-18 05:25:52.381300 | orchestrator | Wednesday 18 February 2026 05:25:41 +0000 (0:00:02.333) 0:05:48.070 **** 2026-02-18 05:25:52.381304 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:25:52.381307 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:25:52.381312 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:25:52.381316 | orchestrator | 2026-02-18 05:25:52.381319 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-18 05:25:52.381323 | orchestrator | Wednesday 18 February 2026 05:25:45 +0000 (0:00:03.430) 0:05:51.501 **** 2026-02-18 05:25:52.381327 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:25:52.381331 | orchestrator | 2026-02-18 05:25:52.381334 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-18 05:25:52.381341 | orchestrator | Wednesday 18 February 2026 05:25:47 +0000 (0:00:02.559) 0:05:54.060 **** 2026-02-18 05:25:52.381345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-18 05:25:52.381354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-18 05:26:09.434447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-18 05:26:09.434592 | orchestrator | 2026-02-18 05:26:09.434611 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-18 05:26:09.434624 | orchestrator | Wednesday 18 February 2026 05:25:52 +0000 (0:00:04.742) 0:05:58.803 **** 2026-02-18 05:26:09.434653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-18 05:26:09.434667 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:26:09.434681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-18 05:26:09.434693 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:26:09.434723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-18 05:26:09.434744 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:26:09.434756 | orchestrator | 2026-02-18 05:26:09.434767 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-18 05:26:09.434779 | orchestrator | Wednesday 18 February 2026 05:25:53 +0000 (0:00:01.586) 0:06:00.390 **** 2026-02-18 05:26:09.434791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:26:09.434806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:26:09.434819 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:26:09.434830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:26:09.434842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:26:09.434853 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:26:09.434870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:26:09.434881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:26:09.434922 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:26:09.434944 | orchestrator | 2026-02-18 05:26:09.434956 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-18 05:26:09.434967 | orchestrator | Wednesday 18 February 2026 05:25:55 +0000 (0:00:01.942) 0:06:02.333 **** 2026-02-18 05:26:09.434980 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:26:09.434994 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:26:09.435006 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:26:09.435018 | orchestrator | 2026-02-18 05:26:09.435032 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-18 05:26:09.435045 | orchestrator | Wednesday 18 February 2026 05:25:58 +0000 (0:00:02.341) 0:06:04.675 **** 2026-02-18 05:26:09.435058 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:26:09.435070 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:26:09.435082 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:26:09.435095 | orchestrator | 2026-02-18 05:26:09.435108 | orchestrator | TASK 
[include_role : nova] ***************************************************** 2026-02-18 05:26:09.435120 | orchestrator | Wednesday 18 February 2026 05:26:01 +0000 (0:00:02.888) 0:06:07.563 **** 2026-02-18 05:26:09.435133 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:26:09.435153 | orchestrator | 2026-02-18 05:26:09.435167 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-18 05:26:09.435180 | orchestrator | Wednesday 18 February 2026 05:26:03 +0000 (0:00:02.488) 0:06:10.051 **** 2026-02-18 05:26:09.435203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:26:10.578251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:26:10.578334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 
05:26:10.578341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:26:10.578361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:26:10.578377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 05:26:10.578382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:26:10.578396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:26:10.578401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 05:26:10.578413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:26:10.578426 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:26:11.265165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 05:26:11.265249 | orchestrator | 2026-02-18 05:26:11.265259 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-18 05:26:11.265268 | orchestrator | Wednesday 18 February 2026 05:26:10 +0000 (0:00:06.956) 0:06:17.008 **** 2026-02-18 05:26:11.265293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:26:11.265321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:26:11.265329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:26:11.265348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 05:26:11.265356 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:26:11.265365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:26:11.265376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:26:11.265388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:26:11.265396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 05:26:11.265403 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:26:11.265415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-02-18 05:26:30.653110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:26:30.653275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-18 05:26:30.653298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-18 05:26:30.653312 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:26:30.653326 | orchestrator | 2026-02-18 05:26:30.653339 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-18 05:26:30.653352 | orchestrator | Wednesday 18 February 2026 05:26:12 +0000 (0:00:01.895) 0:06:18.903 **** 2026-02-18 05:26:30.653365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:26:30.653379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:26:30.653392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:26:30.653404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-18 05:26:30.653415 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:26:30.653426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:26:30.653460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:26:30.653480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:26:30.653499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:26:30.653530 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:26:30.653559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:26:30.653581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:26:30.653601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:26:30.653617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:26:30.653629 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:26:30.653642 | orchestrator | 2026-02-18 05:26:30.653654 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-18 05:26:30.653667 | orchestrator | Wednesday 18 February 2026 05:26:15 +0000 (0:00:02.625) 0:06:21.528 **** 2026-02-18 05:26:30.653680 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:26:30.653693 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:26:30.653705 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:26:30.653717 | orchestrator | 2026-02-18 05:26:30.653729 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-18 05:26:30.653742 | orchestrator | Wednesday 18 February 2026 05:26:17 +0000 (0:00:02.264) 0:06:23.792 **** 2026-02-18 05:26:30.653754 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:26:30.653766 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:26:30.653778 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:26:30.653790 | orchestrator | 2026-02-18 05:26:30.653804 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-18 05:26:30.653816 | orchestrator | Wednesday 18 February 2026 05:26:20 +0000 (0:00:03.173) 0:06:26.966 **** 2026-02-18 05:26:30.653831 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:26:30.653851 | orchestrator | 2026-02-18 
05:26:30.653897 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-18 05:26:30.653916 | orchestrator | Wednesday 18 February 2026 05:26:23 +0000 (0:00:02.823) 0:06:29.789 **** 2026-02-18 05:26:30.653935 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-18 05:26:30.653955 | orchestrator | 2026-02-18 05:26:30.653974 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-18 05:26:30.653992 | orchestrator | Wednesday 18 February 2026 05:26:25 +0000 (0:00:01.737) 0:06:31.526 **** 2026-02-18 05:26:30.654012 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-18 05:26:30.654097 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-18 05:26:30.654133 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-18 05:26:50.281284 | orchestrator | 2026-02-18 05:26:50.281394 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-18 05:26:50.281410 | orchestrator | Wednesday 18 February 2026 05:26:30 +0000 (0:00:05.542) 0:06:37.068 **** 2026-02-18 05:26:50.281438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 05:26:50.281452 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:26:50.281465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 05:26:50.281475 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:26:50.281485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 05:26:50.281495 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:26:50.281505 | orchestrator | 2026-02-18 05:26:50.281514 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-18 05:26:50.281525 | orchestrator | Wednesday 18 February 2026 05:26:33 +0000 (0:00:02.419) 0:06:39.488 **** 2026-02-18 05:26:50.281536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 05:26:50.281549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 05:26:50.281560 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:26:50.281570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 05:26:50.281580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 05:26:50.281610 | 
orchestrator | skipping: [testbed-node-1] 2026-02-18 05:26:50.281620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 05:26:50.281631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-18 05:26:50.281640 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:26:50.281650 | orchestrator | 2026-02-18 05:26:50.281660 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-18 05:26:50.281669 | orchestrator | Wednesday 18 February 2026 05:26:35 +0000 (0:00:02.585) 0:06:42.073 **** 2026-02-18 05:26:50.281679 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:26:50.281689 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:26:50.281699 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:26:50.281708 | orchestrator | 2026-02-18 05:26:50.281717 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-18 05:26:50.281727 | orchestrator | Wednesday 18 February 2026 05:26:39 +0000 (0:00:03.746) 0:06:45.820 **** 2026-02-18 05:26:50.281737 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:26:50.281746 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:26:50.281771 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:26:50.281781 | orchestrator | 2026-02-18 05:26:50.281791 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-18 05:26:50.281801 | orchestrator | Wednesday 18 February 2026 05:26:43 +0000 (0:00:03.912) 0:06:49.732 **** 2026-02-18 05:26:50.281811 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-18 05:26:50.281880 | orchestrator | 2026-02-18 05:26:50.281897 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-18 05:26:50.281909 | orchestrator | Wednesday 18 February 2026 05:26:45 +0000 (0:00:01.775) 0:06:51.508 **** 2026-02-18 05:26:50.281921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 05:26:50.281933 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:26:50.281945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 05:26:50.281956 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:26:50.281968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 05:26:50.281987 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:26:50.281998 | orchestrator | 2026-02-18 05:26:50.282009 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-18 05:26:50.282074 | orchestrator | Wednesday 18 February 2026 05:26:47 +0000 (0:00:02.579) 0:06:54.088 **** 2026-02-18 05:26:50.282087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 05:26:50.282099 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:26:50.282111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 05:26:50.282122 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:26:50.282143 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-18 05:27:24.728115 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:27:24.728259 | orchestrator | 2026-02-18 05:27:24.728288 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-18 05:27:24.728310 | orchestrator | Wednesday 18 February 2026 05:26:50 +0000 (0:00:02.607) 0:06:56.696 **** 2026-02-18 05:27:24.728331 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:27:24.728350 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:27:24.728368 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:27:24.728387 | orchestrator | 2026-02-18 05:27:24.728405 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-18 05:27:24.728444 | orchestrator | Wednesday 18 February 2026 05:26:52 +0000 (0:00:02.525) 0:06:59.222 **** 2026-02-18 05:27:24.728465 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:27:24.728485 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:27:24.728504 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:27:24.728522 | orchestrator | 2026-02-18 05:27:24.728541 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-18 05:27:24.728558 | orchestrator | Wednesday 18 February 2026 05:26:56 +0000 (0:00:03.641) 0:07:02.864 **** 2026-02-18 05:27:24.728577 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:27:24.728596 | orchestrator | ok: [testbed-node-2] 2026-02-18 
05:27:24.728615 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:27:24.728633 | orchestrator | 2026-02-18 05:27:24.728652 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-18 05:27:24.728670 | orchestrator | Wednesday 18 February 2026 05:27:00 +0000 (0:00:04.008) 0:07:06.872 **** 2026-02-18 05:27:24.728690 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-18 05:27:24.728739 | orchestrator | 2026-02-18 05:27:24.728783 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-18 05:27:24.728803 | orchestrator | Wednesday 18 February 2026 05:27:02 +0000 (0:00:02.363) 0:07:09.235 **** 2026-02-18 05:27:24.728824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-18 05:27:24.728849 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:27:24.728870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}}}})  2026-02-18 05:27:24.728891 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:27:24.728910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-18 05:27:24.728929 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:27:24.728949 | orchestrator | 2026-02-18 05:27:24.728968 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-18 05:27:24.728988 | orchestrator | Wednesday 18 February 2026 05:27:05 +0000 (0:00:02.584) 0:07:11.820 **** 2026-02-18 05:27:24.729008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-18 05:27:24.729029 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:27:24.729072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-18 05:27:24.729092 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:27:24.729120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-18 05:27:24.729162 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:27:24.729182 | orchestrator | 2026-02-18 05:27:24.729200 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-18 05:27:24.729217 | orchestrator | Wednesday 18 February 2026 05:27:07 +0000 (0:00:02.453) 0:07:14.273 **** 2026-02-18 05:27:24.729236 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:27:24.729256 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:27:24.729275 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:27:24.729293 | orchestrator | 2026-02-18 05:27:24.729311 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-18 05:27:24.729329 | orchestrator | Wednesday 18 February 2026 05:27:10 +0000 (0:00:02.618) 0:07:16.892 **** 2026-02-18 05:27:24.729347 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:27:24.729366 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:27:24.729384 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:27:24.729402 | orchestrator 
| 2026-02-18 05:27:24.729419 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-18 05:27:24.729437 | orchestrator | Wednesday 18 February 2026 05:27:13 +0000 (0:00:03.436) 0:07:20.328 **** 2026-02-18 05:27:24.729456 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:27:24.729475 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:27:24.729493 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:27:24.729511 | orchestrator | 2026-02-18 05:27:24.729529 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-18 05:27:24.729547 | orchestrator | Wednesday 18 February 2026 05:27:18 +0000 (0:00:04.540) 0:07:24.868 **** 2026-02-18 05:27:24.729565 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:27:24.729583 | orchestrator | 2026-02-18 05:27:24.729602 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-18 05:27:24.729619 | orchestrator | Wednesday 18 February 2026 05:27:20 +0000 (0:00:02.520) 0:07:27.389 **** 2026-02-18 05:27:24.729638 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 05:27:24.729659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 05:27:24.729690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 05:27:25.906430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 05:27:25.906529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:27:25.906546 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 05:27:25.906559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 05:27:25.906573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 05:27:25.906603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 05:27:25.906644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:27:25.906657 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-18 05:27:25.906670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 05:27:25.906682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 05:27:25.906694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 05:27:25.906713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:27:25.906726 | orchestrator | 2026-02-18 05:27:25.906747 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-18 05:27:26.894845 | orchestrator | Wednesday 18 February 2026 05:27:25 +0000 (0:00:04.944) 0:07:32.334 **** 2026-02-18 05:27:26.894999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 05:27:26.895025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-02-18 05:27:26.895039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 05:27:26.895052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 05:27:26.895064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:27:26.895098 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:27:26.895138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 05:27:26.895152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 05:27:26.895164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 05:27:26.895175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 05:27:26.895186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:27:26.895205 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:27:26.895217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-18 05:27:26.895242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-18 05:27:44.766334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-18 05:27:44.766454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-18 05:27:44.766473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-18 05:27:44.766487 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:27:44.766501 | orchestrator | 2026-02-18 05:27:44.766514 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-18 05:27:44.766527 | orchestrator | Wednesday 18 February 2026 05:27:28 +0000 (0:00:02.173) 0:07:34.507 **** 2026-02-18 05:27:44.766539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 05:27:44.766577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 05:27:44.766591 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:27:44.766602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 05:27:44.766613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 05:27:44.766624 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:27:44.766635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 05:27:44.766647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-18 05:27:44.766658 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:27:44.766668 | orchestrator | 2026-02-18 05:27:44.766680 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-18 05:27:44.766691 | orchestrator | Wednesday 18 February 2026 05:27:29 +0000 (0:00:01.892) 0:07:36.400 **** 2026-02-18 05:27:44.766702 | orchestrator | ok: [testbed-node-0] 2026-02-18 
05:27:44.766714 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:27:44.766780 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:27:44.766792 | orchestrator | 2026-02-18 05:27:44.766819 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-18 05:27:44.766830 | orchestrator | Wednesday 18 February 2026 05:27:32 +0000 (0:00:02.261) 0:07:38.662 **** 2026-02-18 05:27:44.766842 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:27:44.766853 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:27:44.766883 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:27:44.766896 | orchestrator | 2026-02-18 05:27:44.766909 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-18 05:27:44.766922 | orchestrator | Wednesday 18 February 2026 05:27:35 +0000 (0:00:03.056) 0:07:41.719 **** 2026-02-18 05:27:44.766935 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:27:44.766948 | orchestrator | 2026-02-18 05:27:44.766960 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-18 05:27:44.766973 | orchestrator | Wednesday 18 February 2026 05:27:37 +0000 (0:00:02.478) 0:07:44.198 **** 2026-02-18 05:27:44.766987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:27:44.767006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:27:44.767039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:27:44.767080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-18 05:27:46.924191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-18 05:27:46.924328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-18 05:27:46.924347 | orchestrator | 2026-02-18 05:27:46.924363 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-18 
05:27:46.924375 | orchestrator | Wednesday 18 February 2026 05:27:44 +0000 (0:00:06.989) 0:07:51.188 **** 2026-02-18 05:27:46.924388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:27:46.924435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-18 05:27:46.924449 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:27:46.924462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:27:46.924482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-18 05:27:46.924494 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:27:46.924505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:27:46.924533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-18 05:27:58.004303 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:27:58.004419 | orchestrator | 2026-02-18 05:27:58.004437 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-18 05:27:58.004454 | orchestrator | Wednesday 18 February 2026 05:27:46 +0000 (0:00:02.157) 0:07:53.345 **** 2026-02-18 05:27:58.004475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:27:58.004497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-18 05:27:58.004518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-18 05:27:58.004540 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:27:58.004558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:27:58.004575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-18 05:27:58.004594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-18 05:27:58.004611 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:27:58.004630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:27:58.004647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-18 05:27:58.004664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-18 05:27:58.004682 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:27:58.004726 | orchestrator | 2026-02-18 05:27:58.004766 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-18 05:27:58.004788 | orchestrator | Wednesday 18 February 2026 05:27:48 +0000 (0:00:01.803) 0:07:55.148 **** 2026-02-18 05:27:58.004804 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:27:58.004822 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:27:58.004841 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:27:58.004860 | orchestrator | 2026-02-18 05:27:58.004878 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-18 05:27:58.004896 | orchestrator | Wednesday 18 February 2026 05:27:50 +0000 (0:00:01.569) 0:07:56.718 **** 2026-02-18 05:27:58.004943 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:27:58.004964 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:27:58.004982 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:27:58.005001 | orchestrator | 2026-02-18 05:27:58.005021 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-18 05:27:58.005040 | orchestrator | Wednesday 18 February 2026 05:27:53 +0000 (0:00:02.748) 0:07:59.467 **** 2026-02-18 05:27:58.005059 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:27:58.005079 | orchestrator | 2026-02-18 05:27:58.005097 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-18 05:27:58.005115 | orchestrator | Wednesday 18 February 2026 05:27:55 +0000 (0:00:02.291) 0:08:01.758 **** 2026-02-18 05:27:58.005168 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-18 05:27:58.005195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 05:27:58.005216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:27:58.005236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:27:58.005266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 05:27:58.005313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-18 05:28:00.022122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-18 05:28:00.022231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 05:28:00.022258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:00.022278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 05:28:00.022352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:00.022377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:00.022411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 05:28:00.022424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:00.022435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 05:28:00.022449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:28:00.022468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-18 05:28:00.022491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:00.022512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:02.314384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 05:28:02.314491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:28:02.314510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-18 05:28:02.314563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:02.314576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:02.314589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 05:28:02.314620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:28:02.314634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-18 05:28:02.314661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:02.314673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:02.314684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 05:28:02.314766 | orchestrator | 2026-02-18 05:28:02.314782 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single 
external frontend] *** 2026-02-18 05:28:02.314794 | orchestrator | Wednesday 18 February 2026 05:28:01 +0000 (0:00:05.976) 0:08:07.735 **** 2026-02-18 05:28:02.314816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-18 05:28:02.507453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 05:28:02.507548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:02.507586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:02.507613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 05:28:02.507626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:28:02.507655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-18 05:28:02.507669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:02.507681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:02.507773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 05:28:02.507789 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:02.507808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-18 05:28:02.507821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 05:28:02.507832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:02.507854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:03.753828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 05:28:03.753979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:28:03.753995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-18 05:28:03.754005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:03.754051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:03.754075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-18 05:28:03.754089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 05:28:03.754101 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-18 05:28:03.754109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:03.754116 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:03.754125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:03.754132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-18 05:28:03.754145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-18 05:28:16.412541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-18 05:28:16.412666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:16.412750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:28:16.412765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-18 05:28:16.412777 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:16.412791 | orchestrator | 2026-02-18 05:28:16.412804 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-18 05:28:16.412816 | orchestrator | Wednesday 18 February 2026 05:28:03 +0000 (0:00:02.448) 0:08:10.184 **** 2026-02-18 05:28:16.412829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-18 05:28:16.412843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-18 05:28:16.412879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:28:16.412910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 
'backend_http_extra': ['option httpchk']}})  2026-02-18 05:28:16.412923 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:16.412935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-18 05:28:16.412946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-18 05:28:16.412964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:28:16.412976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:28:16.412987 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:16.412998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 
'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-18 05:28:16.413010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-18 05:28:16.413021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:28:16.413033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-18 05:28:16.413052 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:16.413065 | orchestrator | 2026-02-18 05:28:16.413078 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-18 05:28:16.413091 | orchestrator | Wednesday 18 February 2026 05:28:05 +0000 (0:00:02.003) 0:08:12.187 **** 2026-02-18 05:28:16.413104 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:16.413116 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:16.413129 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:16.413141 | orchestrator | 
2026-02-18 05:28:16.413153 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-18 05:28:16.413166 | orchestrator | Wednesday 18 February 2026 05:28:07 +0000 (0:00:01.949) 0:08:14.136 **** 2026-02-18 05:28:16.413179 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:16.413191 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:16.413203 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:16.413216 | orchestrator | 2026-02-18 05:28:16.413228 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-18 05:28:16.413241 | orchestrator | Wednesday 18 February 2026 05:28:10 +0000 (0:00:02.490) 0:08:16.627 **** 2026-02-18 05:28:16.413253 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:28:16.413265 | orchestrator | 2026-02-18 05:28:16.413277 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-18 05:28:16.413290 | orchestrator | Wednesday 18 February 2026 05:28:12 +0000 (0:00:02.249) 0:08:18.876 **** 2026-02-18 05:28:16.413312 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:28:34.073828 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:28:34.073953 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:28:34.073992 | orchestrator | 2026-02-18 05:28:34.074006 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-18 05:28:34.074104 | orchestrator | Wednesday 18 February 2026 05:28:16 +0000 (0:00:03.953) 0:08:22.829 **** 2026-02-18 05:28:34.074121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:28:34.074134 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:34.074165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:28:34.074178 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:34.074197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:28:34.074219 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:34.074230 | orchestrator | 2026-02-18 05:28:34.074241 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 
2026-02-18 05:28:34.074252 | orchestrator | Wednesday 18 February 2026 05:28:17 +0000 (0:00:01.571) 0:08:24.401 **** 2026-02-18 05:28:34.074264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-18 05:28:34.074277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-18 05:28:34.074287 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:34.074298 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:34.074315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-18 05:28:34.074334 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:34.074352 | orchestrator | 2026-02-18 05:28:34.074372 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-18 05:28:34.074391 | orchestrator | Wednesday 18 February 2026 05:28:19 +0000 (0:00:01.449) 0:08:25.851 **** 2026-02-18 05:28:34.074411 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:34.074423 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:34.074434 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:34.074444 | orchestrator | 2026-02-18 05:28:34.074455 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-18 05:28:34.074466 | orchestrator | Wednesday 18 February 2026 05:28:21 +0000 (0:00:01.909) 0:08:27.760 **** 2026-02-18 05:28:34.074476 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:34.074487 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:34.074498 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:34.074508 | orchestrator | 2026-02-18 
05:28:34.074519 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-18 05:28:34.074530 | orchestrator | Wednesday 18 February 2026 05:28:23 +0000 (0:00:02.248) 0:08:30.009 **** 2026-02-18 05:28:34.074540 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:28:34.074552 | orchestrator | 2026-02-18 05:28:34.074563 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-18 05:28:34.074573 | orchestrator | Wednesday 18 February 2026 05:28:25 +0000 (0:00:02.408) 0:08:32.418 **** 2026-02-18 05:28:34.074585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-18 05:28:34.074614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-18 05:28:35.869964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-18 05:28:35.870138 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-18 05:28:35.870157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-18 05:28:35.870189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-18 05:28:35.870229 | orchestrator | 2026-02-18 05:28:35.870243 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-18 05:28:35.870256 | orchestrator | Wednesday 18 February 2026 05:28:34 +0000 (0:00:08.079) 0:08:40.498 **** 2026-02-18 05:28:35.870269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-18 05:28:35.870323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-18 05:28:35.870337 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:35.870356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-18 05:28:35.870385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-18 05:28:57.660081 | orchestrator | 
skipping: [testbed-node-1] 2026-02-18 05:28:57.660187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-18 05:28:57.660213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-18 05:28:57.660222 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:57.660230 | orchestrator | 2026-02-18 05:28:57.660246 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-18 05:28:57.660255 | orchestrator | Wednesday 18 February 2026 05:28:35 +0000 (0:00:01.798) 0:08:42.296 **** 2026-02-18 05:28:57.660265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-18 05:28:57.660290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-18 05:28:57.660307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:28:57.660313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:28:57.660318 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:57.660322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-18 05:28:57.660327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-18 05:28:57.660344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:28:57.660349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:28:57.660353 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:57.660357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-18 05:28:57.660362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-18 05:28:57.660366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:28:57.660371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-18 05:28:57.660375 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:57.660380 | orchestrator | 2026-02-18 05:28:57.660384 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-18 05:28:57.660389 | orchestrator | Wednesday 18 February 2026 05:28:37 +0000 (0:00:02.068) 0:08:44.365 **** 2026-02-18 05:28:57.660393 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:28:57.660398 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:28:57.660406 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:28:57.660411 | orchestrator | 2026-02-18 05:28:57.660415 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-18 05:28:57.660419 | orchestrator | Wednesday 18 February 2026 05:28:40 +0000 (0:00:02.284) 0:08:46.649 **** 2026-02-18 05:28:57.660424 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:28:57.660428 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:28:57.660432 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:28:57.660436 | orchestrator | 2026-02-18 05:28:57.660441 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-18 05:28:57.660446 | orchestrator | Wednesday 18 February 2026 05:28:43 +0000 (0:00:03.117) 0:08:49.766 **** 2026-02-18 05:28:57.660450 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:57.660454 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:57.660459 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:57.660463 | orchestrator | 2026-02-18 
05:28:57.660468 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-18 05:28:57.660472 | orchestrator | Wednesday 18 February 2026 05:28:44 +0000 (0:00:01.397) 0:08:51.164 **** 2026-02-18 05:28:57.660476 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:57.660481 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:57.660485 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:57.660489 | orchestrator | 2026-02-18 05:28:57.660494 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-18 05:28:57.660498 | orchestrator | Wednesday 18 February 2026 05:28:46 +0000 (0:00:01.441) 0:08:52.605 **** 2026-02-18 05:28:57.660502 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:57.660510 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:57.660514 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:57.660518 | orchestrator | 2026-02-18 05:28:57.660523 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-18 05:28:57.660527 | orchestrator | Wednesday 18 February 2026 05:28:47 +0000 (0:00:01.816) 0:08:54.422 **** 2026-02-18 05:28:57.660531 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:57.660536 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:57.660540 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:57.660544 | orchestrator | 2026-02-18 05:28:57.660549 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-18 05:28:57.660553 | orchestrator | Wednesday 18 February 2026 05:28:49 +0000 (0:00:01.421) 0:08:55.844 **** 2026-02-18 05:28:57.660557 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:28:57.660562 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:28:57.660566 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:28:57.660570 | orchestrator | 2026-02-18 
05:28:57.660574 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-18 05:28:57.660579 | orchestrator | Wednesday 18 February 2026 05:28:50 +0000 (0:00:01.417) 0:08:57.261 **** 2026-02-18 05:28:57.660583 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:28:57.660588 | orchestrator | 2026-02-18 05:28:57.660593 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-18 05:28:57.660597 | orchestrator | Wednesday 18 February 2026 05:28:53 +0000 (0:00:02.706) 0:08:59.968 **** 2026-02-18 05:28:57.660625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-18 05:29:01.807281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 
'timeout': '30'}}}) 2026-02-18 05:29:01.807394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-18 05:29:01.807410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:29:01.807439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:29:01.807452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-18 05:29:01.807464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:29:01.807509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:29:01.807559 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-18 05:29:01.807582 | orchestrator | 2026-02-18 05:29:01.807644 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-18 05:29:01.807658 | orchestrator | Wednesday 18 February 2026 05:28:57 +0000 (0:00:04.116) 0:09:04.084 **** 2026-02-18 05:29:01.807670 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 05:29:01.807683 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:29:01.807694 | orchestrator | } 2026-02-18 05:29:01.807706 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:29:01.807716 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:29:01.807727 | orchestrator | } 2026-02-18 05:29:01.807738 | orchestrator | changed: [testbed-node-2] => { 2026-02-18 05:29:01.807748 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:29:01.807759 | orchestrator | } 2026-02-18 05:29:01.807770 | orchestrator | 2026-02-18 05:29:01.807781 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-18 05:29:01.807792 | orchestrator | Wednesday 18 February 2026 05:28:59 +0000 (0:00:01.635) 0:09:05.720 **** 2026-02-18 05:29:01.807803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-18 05:29:01.807822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:29:01.807834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:29:01.807845 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:29:01.807857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-18 05:29:01.807887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:31:01.196823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:31:01.196946 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:31:01.196964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-18 05:31:01.196978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-18 05:31:01.197008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-18 05:31:01.197020 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:31:01.197031 | orchestrator | 2026-02-18 05:31:01.197044 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-18 05:31:01.197077 | orchestrator | Wednesday 18 February 
2026 05:29:01 +0000 (0:00:02.506) 0:09:08.227 **** 2026-02-18 05:31:01.197089 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:31:01.197101 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:31:01.197112 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:31:01.197123 | orchestrator | 2026-02-18 05:31:01.197134 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-18 05:31:01.197145 | orchestrator | Wednesday 18 February 2026 05:29:03 +0000 (0:00:01.915) 0:09:10.142 **** 2026-02-18 05:31:01.197156 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:31:01.197167 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:31:01.197177 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:31:01.197188 | orchestrator | 2026-02-18 05:31:01.197199 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-18 05:31:01.197210 | orchestrator | Wednesday 18 February 2026 05:29:05 +0000 (0:00:01.426) 0:09:11.569 **** 2026-02-18 05:31:01.197221 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:31:01.197232 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:31:01.197243 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:31:01.197254 | orchestrator | 2026-02-18 05:31:01.197264 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-18 05:31:01.197275 | orchestrator | Wednesday 18 February 2026 05:29:12 +0000 (0:00:07.072) 0:09:18.641 **** 2026-02-18 05:31:01.197286 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:31:01.197297 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:31:01.197308 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:31:01.197319 | orchestrator | 2026-02-18 05:31:01.197330 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-18 05:31:01.197340 | orchestrator | Wednesday 18 February 2026 05:29:19 +0000 (0:00:07.465) 
0:09:26.107 **** 2026-02-18 05:31:01.197353 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:31:01.197365 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:31:01.197377 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:31:01.197390 | orchestrator | 2026-02-18 05:31:01.197403 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-18 05:31:01.197471 | orchestrator | Wednesday 18 February 2026 05:29:26 +0000 (0:00:07.102) 0:09:33.210 **** 2026-02-18 05:31:01.197483 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:31:01.197496 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:31:01.197509 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:31:01.197522 | orchestrator | 2026-02-18 05:31:01.197553 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-18 05:31:01.197566 | orchestrator | Wednesday 18 February 2026 05:29:34 +0000 (0:00:07.592) 0:09:40.802 **** 2026-02-18 05:31:01.197578 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:31:01.197591 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:31:01.197602 | orchestrator | 2026-02-18 05:31:01.197615 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-18 05:31:01.197627 | orchestrator | Wednesday 18 February 2026 05:29:38 +0000 (0:00:03.680) 0:09:44.483 **** 2026-02-18 05:31:01.197640 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:31:01.197652 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:31:01.197664 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:31:01.197676 | orchestrator | 2026-02-18 05:31:01.197689 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-18 05:31:01.197702 | orchestrator | Wednesday 18 February 2026 05:29:51 +0000 (0:00:13.239) 0:09:57.722 **** 2026-02-18 05:31:01.197714 | orchestrator | ok: [testbed-node-2] 2026-02-18 
05:31:01.197725 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:31:01.197736 | orchestrator | 2026-02-18 05:31:01.197746 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-18 05:31:01.197757 | orchestrator | Wednesday 18 February 2026 05:29:54 +0000 (0:00:03.696) 0:10:01.418 **** 2026-02-18 05:31:01.197768 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:31:01.197779 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:31:01.197799 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:31:01.197810 | orchestrator | 2026-02-18 05:31:01.197821 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-18 05:31:01.197831 | orchestrator | Wednesday 18 February 2026 05:30:02 +0000 (0:00:07.329) 0:10:08.748 **** 2026-02-18 05:31:01.197842 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:31:01.197853 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:31:01.197864 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:31:01.197874 | orchestrator | 2026-02-18 05:31:01.197885 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-18 05:31:01.197896 | orchestrator | Wednesday 18 February 2026 05:30:09 +0000 (0:00:06.824) 0:10:15.572 **** 2026-02-18 05:31:01.197907 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:31:01.197917 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:31:01.197928 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:31:01.197939 | orchestrator | 2026-02-18 05:31:01.197950 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-18 05:31:01.197960 | orchestrator | Wednesday 18 February 2026 05:30:15 +0000 (0:00:06.815) 0:10:22.388 **** 2026-02-18 05:31:01.197971 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:31:01.197982 | orchestrator | skipping: [testbed-node-2] 2026-02-18 
05:31:01.197992 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:31:01.198003 | orchestrator | 2026-02-18 05:31:01.198076 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-18 05:31:01.198097 | orchestrator | Wednesday 18 February 2026 05:30:22 +0000 (0:00:06.805) 0:10:29.193 **** 2026-02-18 05:31:01.198109 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:31:01.198120 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:31:01.198131 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:31:01.198141 | orchestrator | 2026-02-18 05:31:01.198152 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-18 05:31:01.198163 | orchestrator | Wednesday 18 February 2026 05:30:29 +0000 (0:00:07.212) 0:10:36.406 **** 2026-02-18 05:31:01.198174 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:31:01.198185 | orchestrator | 2026-02-18 05:31:01.198196 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-18 05:31:01.198206 | orchestrator | Wednesday 18 February 2026 05:30:33 +0000 (0:00:03.631) 0:10:40.038 **** 2026-02-18 05:31:01.198217 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:31:01.198228 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:31:01.198239 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:31:01.198250 | orchestrator | 2026-02-18 05:31:01.198261 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-18 05:31:01.198271 | orchestrator | Wednesday 18 February 2026 05:30:45 +0000 (0:00:12.126) 0:10:52.165 **** 2026-02-18 05:31:01.198282 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:31:01.198293 | orchestrator | 2026-02-18 05:31:01.198304 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-18 05:31:01.198315 | orchestrator | Wednesday 18 February 2026 
05:30:49 +0000 (0:00:03.670) 0:10:55.835 **** 2026-02-18 05:31:01.198325 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:31:01.198336 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:31:01.198347 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:31:01.198358 | orchestrator | 2026-02-18 05:31:01.198369 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-18 05:31:01.198380 | orchestrator | Wednesday 18 February 2026 05:30:56 +0000 (0:00:06.791) 0:11:02.626 **** 2026-02-18 05:31:01.198390 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:31:01.198401 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:31:01.198412 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:31:01.198423 | orchestrator | 2026-02-18 05:31:01.198477 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-18 05:31:01.198489 | orchestrator | Wednesday 18 February 2026 05:30:58 +0000 (0:00:02.040) 0:11:04.667 **** 2026-02-18 05:31:01.198508 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:31:01.198519 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:31:01.198530 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:31:01.198540 | orchestrator | 2026-02-18 05:31:01.198551 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:31:01.198563 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-18 05:31:01.198576 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-18 05:31:01.198596 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-18 05:31:02.138554 | orchestrator | 2026-02-18 05:31:02.138676 | orchestrator | 2026-02-18 05:31:02.138692 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-18 05:31:02.138706 | orchestrator | Wednesday 18 February 2026 05:31:01 +0000 (0:00:02.951) 0:11:07.619 **** 2026-02-18 05:31:02.138718 | orchestrator | =============================================================================== 2026-02-18 05:31:02.138729 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.24s 2026-02-18 05:31:02.138740 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.13s 2026-02-18 05:31:02.138751 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.08s 2026-02-18 05:31:02.138762 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.59s 2026-02-18 05:31:02.138773 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.47s 2026-02-18 05:31:02.138784 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.33s 2026-02-18 05:31:02.138795 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.21s 2026-02-18 05:31:02.138805 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.10s 2026-02-18 05:31:02.138816 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.07s 2026-02-18 05:31:02.138827 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.99s 2026-02-18 05:31:02.138837 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.96s 2026-02-18 05:31:02.138848 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.82s 2026-02-18 05:31:02.138859 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.82s 2026-02-18 05:31:02.138870 | orchestrator | loadbalancer : Stop master 
keepalived container ------------------------- 6.81s 2026-02-18 05:31:02.138881 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.79s 2026-02-18 05:31:02.138891 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.45s 2026-02-18 05:31:02.138902 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.98s 2026-02-18 05:31:02.138912 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.83s 2026-02-18 05:31:02.138923 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.54s 2026-02-18 05:31:02.138934 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 5.06s 2026-02-18 05:31:02.461620 | orchestrator | + osism apply -a upgrade opensearch 2026-02-18 05:31:04.690569 | orchestrator | 2026-02-18 05:31:04 | INFO  | Task f42ca724-f77f-4c49-8995-ed5501e93574 (opensearch) was prepared for execution. 2026-02-18 05:31:04.690648 | orchestrator | 2026-02-18 05:31:04 | INFO  | It takes a moment until task f42ca724-f77f-4c49-8995-ed5501e93574 (opensearch) has been started and output is visible here. 
2026-02-18 05:31:24.379892 | orchestrator | 2026-02-18 05:31:24.380013 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 05:31:24.380030 | orchestrator | 2026-02-18 05:31:24.380066 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 05:31:24.380078 | orchestrator | Wednesday 18 February 2026 05:31:10 +0000 (0:00:01.811) 0:00:01.811 **** 2026-02-18 05:31:24.380090 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:31:24.380102 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:31:24.380113 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:31:24.380124 | orchestrator | 2026-02-18 05:31:24.380135 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 05:31:24.380146 | orchestrator | Wednesday 18 February 2026 05:31:12 +0000 (0:00:01.691) 0:00:03.503 **** 2026-02-18 05:31:24.380158 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-18 05:31:24.380169 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-18 05:31:24.380180 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-18 05:31:24.380191 | orchestrator | 2026-02-18 05:31:24.380202 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-18 05:31:24.380213 | orchestrator | 2026-02-18 05:31:24.380224 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-18 05:31:24.380235 | orchestrator | Wednesday 18 February 2026 05:31:14 +0000 (0:00:02.227) 0:00:05.730 **** 2026-02-18 05:31:24.380247 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:31:24.380259 | orchestrator | 2026-02-18 05:31:24.380270 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-02-18 05:31:24.380281 | orchestrator | Wednesday 18 February 2026 05:31:17 +0000 (0:00:02.340) 0:00:08.071 **** 2026-02-18 05:31:24.380292 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-18 05:31:24.380303 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-18 05:31:24.380314 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-18 05:31:24.380325 | orchestrator | 2026-02-18 05:31:24.380336 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-18 05:31:24.380347 | orchestrator | Wednesday 18 February 2026 05:31:20 +0000 (0:00:03.037) 0:00:11.108 **** 2026-02-18 05:31:24.380361 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:31:24.380376 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:31:24.380465 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-18 05:31:24.380484 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-18 05:31:24.380500 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-18 05:31:24.380520 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-18 05:31:24.380541 | orchestrator | 2026-02-18 05:31:24.380554 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-18 05:31:24.380567 | orchestrator | Wednesday 18 February 2026 05:31:22 +0000 (0:00:02.453) 0:00:13.562 **** 2026-02-18 05:31:24.380580 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:31:24.380592 | orchestrator | 2026-02-18 05:31:24.380613 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-18 05:31:29.813942 | orchestrator | Wednesday 18 February 2026 05:31:24 +0000 
(0:00:01.800) 0:00:15.363 ****
2026-02-18 05:31:29.814136 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:29.814159 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:29.814173 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:29.814212 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:29.814271 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:29.814287 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:29.814299 | orchestrator |
2026-02-18 05:31:29.814313 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-02-18 05:31:29.814325 | orchestrator | Wednesday 18 February 2026 05:31:28 +0000 (0:00:03.637) 0:00:19.000 ****
2026-02-18 05:31:29.814336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:29.814372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:31.659069 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:31:31.659152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:31.659166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:31.659175 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:31:31.659202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:31.659235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:31.659244 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:31:31.659251 | orchestrator |
2026-02-18 05:31:31.659259 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-02-18 05:31:31.659268 | orchestrator | Wednesday 18 February 2026 05:31:29 +0000 (0:00:01.806) 0:00:20.807 ****
2026-02-18 05:31:31.659275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:31.659282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:31.659295 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:31:31.659306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:31.659320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:35.418806 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:31:35.418955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:35.418976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:35.419016 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:31:35.419028 | orchestrator |
2026-02-18 05:31:35.419040 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-02-18 05:31:35.419051 | orchestrator | Wednesday 18 February 2026 05:31:31 +0000 (0:00:01.839) 0:00:22.646 ****
2026-02-18 05:31:35.419078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:35.419107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:35.419119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:35.419129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:35.419154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:35.419174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:49.351077 | orchestrator |
2026-02-18 05:31:49.351203 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-02-18 05:31:49.351219 | orchestrator | Wednesday 18 February 2026 05:31:35 +0000 (0:00:03.760) 0:00:26.407 ****
2026-02-18 05:31:49.351231 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:31:49.351243 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:31:49.351254 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:31:49.351265 | orchestrator |
2026-02-18 05:31:49.351276 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-02-18 05:31:49.351311 | orchestrator | Wednesday 18 February 2026 05:31:38 +0000 (0:00:03.435) 0:00:29.843 ****
2026-02-18 05:31:49.351323 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:31:49.351334 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:31:49.351344 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:31:49.351355 | orchestrator |
2026-02-18 05:31:49.351366 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-02-18 05:31:49.351377 | orchestrator | Wednesday 18 February 2026 05:31:42 +0000 (0:00:03.226) 0:00:33.069 ****
2026-02-18 05:31:49.351437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:49.351468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:49.351480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:31:49.351511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:49.351534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:49.351553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:31:49.351565 | orchestrator |
2026-02-18 05:31:49.351577 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] ***
2026-02-18 05:31:49.351590 | orchestrator | Wednesday 18 February 2026 05:31:45 +0000 (0:00:03.736) 0:00:36.805 ****
2026-02-18 05:31:49.351603 | orchestrator | changed: [testbed-node-0] => {
2026-02-18 05:31:49.351617 | orchestrator |  "msg": "Notifying handlers"
2026-02-18 05:31:49.351631 | orchestrator | }
2026-02-18 05:31:49.351644 | orchestrator | changed: [testbed-node-1] => {
2026-02-18 05:31:49.351656 | orchestrator |  "msg": "Notifying handlers"
2026-02-18 05:31:49.351668 | orchestrator | }
2026-02-18 05:31:49.351680 | orchestrator | changed: [testbed-node-2] => {
2026-02-18 05:31:49.351692 | orchestrator |  "msg": "Notifying handlers"
2026-02-18 05:31:49.351705 | orchestrator | }
2026-02-18 05:31:49.351717 | orchestrator |
2026-02-18 05:31:49.351730 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-18 05:31:49.351742 | orchestrator | Wednesday 18 February 2026 05:31:47 +0000 (0:00:01.433) 0:00:38.239 ****
2026-02-18 05:31:49.351764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:34:52.016386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:34:52.016510 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:34:52.016546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:34:52.016562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-18 05:34:52.016599 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:34:52.016629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-18 05:34:52.016642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name':
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-18 05:34:52.016654 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:34:52.016666 | orchestrator | 2026-02-18 05:34:52.016678 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-18 05:34:52.016691 | orchestrator | Wednesday 18 February 2026 05:31:49 +0000 (0:00:02.103) 0:00:40.342 **** 2026-02-18 05:34:52.016702 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:34:52.016713 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:34:52.016723 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:34:52.016734 | orchestrator | 2026-02-18 05:34:52.016745 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-18 05:34:52.016756 | orchestrator | Wednesday 18 February 2026 05:31:50 +0000 (0:00:01.566) 0:00:41.909 **** 2026-02-18 05:34:52.016767 | orchestrator | 
2026-02-18 05:34:52.016778 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-18 05:34:52.016788 | orchestrator | Wednesday 18 February 2026 05:31:51 +0000 (0:00:00.491) 0:00:42.400 **** 2026-02-18 05:34:52.016799 | orchestrator | 2026-02-18 05:34:52.016815 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-18 05:34:52.016827 | orchestrator | Wednesday 18 February 2026 05:31:51 +0000 (0:00:00.464) 0:00:42.864 **** 2026-02-18 05:34:52.016837 | orchestrator | 2026-02-18 05:34:52.016848 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-18 05:34:52.016859 | orchestrator | Wednesday 18 February 2026 05:31:52 +0000 (0:00:00.803) 0:00:43.667 **** 2026-02-18 05:34:52.016872 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:34:52.016886 | orchestrator | 2026-02-18 05:34:52.016898 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-18 05:34:52.016923 | orchestrator | Wednesday 18 February 2026 05:31:56 +0000 (0:00:03.418) 0:00:47.085 **** 2026-02-18 05:34:52.016936 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:34:52.016949 | orchestrator | 2026-02-18 05:34:52.016961 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-18 05:34:52.016974 | orchestrator | Wednesday 18 February 2026 05:32:00 +0000 (0:00:04.398) 0:00:51.484 **** 2026-02-18 05:34:52.016986 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:34:52.016999 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:34:52.017011 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:34:52.017024 | orchestrator | 2026-02-18 05:34:52.017036 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-18 05:34:52.017049 | orchestrator | Wednesday 18 February 2026 05:33:11 +0000 (0:01:10.800) 
0:02:02.285 **** 2026-02-18 05:34:52.017061 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:34:52.017073 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:34:52.017086 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:34:52.017098 | orchestrator | 2026-02-18 05:34:52.017110 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-18 05:34:52.017122 | orchestrator | Wednesday 18 February 2026 05:34:42 +0000 (0:01:31.110) 0:03:33.396 **** 2026-02-18 05:34:52.017135 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:34:52.017147 | orchestrator | 2026-02-18 05:34:52.017159 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-18 05:34:52.017172 | orchestrator | Wednesday 18 February 2026 05:34:44 +0000 (0:00:01.775) 0:03:35.171 **** 2026-02-18 05:34:52.017184 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:34:52.017197 | orchestrator | 2026-02-18 05:34:52.017210 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-18 05:34:52.017222 | orchestrator | Wednesday 18 February 2026 05:34:47 +0000 (0:00:03.339) 0:03:38.511 **** 2026-02-18 05:34:52.017233 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:34:52.017326 | orchestrator | 2026-02-18 05:34:52.017353 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-18 05:34:52.017371 | orchestrator | Wednesday 18 February 2026 05:34:50 +0000 (0:00:03.250) 0:03:41.761 **** 2026-02-18 05:34:52.017390 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:34:52.017409 | orchestrator | 2026-02-18 05:34:52.017427 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-18 05:34:52.017455 | orchestrator | Wednesday 18 February 2026 05:34:52 +0000 (0:00:01.242) 
0:03:43.003 **** 2026-02-18 05:34:54.348770 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:34:54.348889 | orchestrator | 2026-02-18 05:34:54.348906 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:34:54.348920 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 05:34:54.348933 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-18 05:34:54.348944 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-18 05:34:54.348955 | orchestrator | 2026-02-18 05:34:54.348966 | orchestrator | 2026-02-18 05:34:54.348977 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 05:34:54.348988 | orchestrator | Wednesday 18 February 2026 05:34:53 +0000 (0:00:01.953) 0:03:44.956 **** 2026-02-18 05:34:54.348999 | orchestrator | =============================================================================== 2026-02-18 05:34:54.349010 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 91.11s 2026-02-18 05:34:54.349021 | orchestrator | opensearch : Restart opensearch container ------------------------------ 70.80s 2026-02-18 05:34:54.349065 | orchestrator | opensearch : Perform a flush -------------------------------------------- 4.40s 2026-02-18 05:34:54.349085 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.76s 2026-02-18 05:34:54.349147 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.74s 2026-02-18 05:34:54.349165 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.64s 2026-02-18 05:34:54.349182 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.44s 2026-02-18 
05:34:54.349199 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.42s 2026-02-18 05:34:54.349215 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.34s 2026-02-18 05:34:54.349234 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.25s 2026-02-18 05:34:54.349287 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.23s 2026-02-18 05:34:54.349306 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 3.04s 2026-02-18 05:34:54.349324 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.45s 2026-02-18 05:34:54.349342 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.34s 2026-02-18 05:34:54.349380 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.23s 2026-02-18 05:34:54.349399 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.10s 2026-02-18 05:34:54.349418 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.95s 2026-02-18 05:34:54.349436 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.84s 2026-02-18 05:34:54.349453 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.81s 2026-02-18 05:34:54.349471 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.80s 2026-02-18 05:34:54.688860 | orchestrator | + osism apply -a upgrade memcached 2026-02-18 05:34:56.823185 | orchestrator | 2026-02-18 05:34:56 | INFO  | Task f7c71c10-6a9e-49ac-a324-d4d04f0dd35a (memcached) was prepared for execution. 
2026-02-18 05:34:56.823333 | orchestrator | 2026-02-18 05:34:56 | INFO  | It takes a moment until task f7c71c10-6a9e-49ac-a324-d4d04f0dd35a (memcached) has been started and output is visible here. 2026-02-18 05:35:31.176339 | orchestrator | 2026-02-18 05:35:31.176436 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 05:35:31.176449 | orchestrator | 2026-02-18 05:35:31.176458 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 05:35:31.176467 | orchestrator | Wednesday 18 February 2026 05:35:02 +0000 (0:00:01.370) 0:00:01.370 **** 2026-02-18 05:35:31.176475 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:35:31.176484 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:35:31.176492 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:35:31.176500 | orchestrator | 2026-02-18 05:35:31.176509 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 05:35:31.176517 | orchestrator | Wednesday 18 February 2026 05:35:04 +0000 (0:00:01.927) 0:00:03.297 **** 2026-02-18 05:35:31.176526 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-18 05:35:31.176534 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-18 05:35:31.176542 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-18 05:35:31.176550 | orchestrator | 2026-02-18 05:35:31.176558 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-18 05:35:31.176566 | orchestrator | 2026-02-18 05:35:31.176574 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-18 05:35:31.176582 | orchestrator | Wednesday 18 February 2026 05:35:06 +0000 (0:00:01.907) 0:00:05.205 **** 2026-02-18 05:35:31.176591 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-18 05:35:31.176600 | orchestrator | 2026-02-18 05:35:31.176627 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-18 05:35:31.176635 | orchestrator | Wednesday 18 February 2026 05:35:09 +0000 (0:00:02.995) 0:00:08.201 **** 2026-02-18 05:35:31.176643 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-18 05:35:31.176651 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-18 05:35:31.176659 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-18 05:35:31.176667 | orchestrator | 2026-02-18 05:35:31.176675 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-18 05:35:31.176683 | orchestrator | Wednesday 18 February 2026 05:35:11 +0000 (0:00:02.030) 0:00:10.231 **** 2026-02-18 05:35:31.176691 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-18 05:35:31.176699 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-18 05:35:31.176707 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-18 05:35:31.176715 | orchestrator | 2026-02-18 05:35:31.176723 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-18 05:35:31.176731 | orchestrator | Wednesday 18 February 2026 05:35:14 +0000 (0:00:02.741) 0:00:12.973 **** 2026-02-18 05:35:31.176742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-18 05:35:31.176765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-18 05:35:31.176789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-18 05:35:31.176798 | orchestrator | 2026-02-18 05:35:31.176807 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 
2026-02-18 05:35:31.176815 | orchestrator | Wednesday 18 February 2026 05:35:16 +0000 (0:00:02.384) 0:00:15.358 **** 2026-02-18 05:35:31.176823 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 05:35:31.176832 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:35:31.176840 | orchestrator | } 2026-02-18 05:35:31.176850 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:35:31.176865 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:35:31.176874 | orchestrator | } 2026-02-18 05:35:31.176883 | orchestrator | changed: [testbed-node-2] => { 2026-02-18 05:35:31.176892 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:35:31.176901 | orchestrator | } 2026-02-18 05:35:31.176911 | orchestrator | 2026-02-18 05:35:31.176920 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-18 05:35:31.176931 | orchestrator | Wednesday 18 February 2026 05:35:17 +0000 (0:00:01.406) 0:00:16.765 **** 2026-02-18 05:35:31.176946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-18 05:35:31.176960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-18 05:35:31.176974 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:35:31.176987 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:35:31.177001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-18 05:35:31.177014 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:35:31.177026 | orchestrator | 2026-02-18 05:35:31.177039 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-18 05:35:31.177058 | orchestrator | Wednesday 18 February 2026 05:35:20 +0000 (0:00:02.230) 0:00:18.996 **** 2026-02-18 05:35:31.177070 | 
orchestrator | changed: [testbed-node-2] 2026-02-18 05:35:31.177082 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:35:31.177094 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:35:31.177106 | orchestrator | 2026-02-18 05:35:31.177119 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:35:31.177133 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 05:35:31.177146 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 05:35:31.177170 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 05:35:31.177184 | orchestrator | 2026-02-18 05:35:31.177200 | orchestrator | 2026-02-18 05:35:31.177238 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 05:35:31.177263 | orchestrator | Wednesday 18 February 2026 05:35:31 +0000 (0:00:11.012) 0:00:30.009 **** 2026-02-18 05:35:31.499589 | orchestrator | =============================================================================== 2026-02-18 05:35:31.499661 | orchestrator | memcached : Restart memcached container -------------------------------- 11.01s 2026-02-18 05:35:31.499668 | orchestrator | memcached : include_tasks ----------------------------------------------- 3.00s 2026-02-18 05:35:31.499673 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.74s 2026-02-18 05:35:31.499678 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.38s 2026-02-18 05:35:31.499684 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.23s 2026-02-18 05:35:31.499689 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.03s 2026-02-18 05:35:31.499694 | orchestrator | Group hosts 
based on Kolla action --------------------------------------- 1.93s 2026-02-18 05:35:31.499698 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.91s 2026-02-18 05:35:31.499703 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.41s 2026-02-18 05:35:31.823854 | orchestrator | + osism apply -a upgrade redis 2026-02-18 05:35:33.902143 | orchestrator | 2026-02-18 05:35:33 | INFO  | Task 32c34a31-6e29-4246-ba4b-3339076c7808 (redis) was prepared for execution. 2026-02-18 05:35:33.902963 | orchestrator | 2026-02-18 05:35:33 | INFO  | It takes a moment until task 32c34a31-6e29-4246-ba4b-3339076c7808 (redis) has been started and output is visible here. 2026-02-18 05:35:53.506695 | orchestrator | 2026-02-18 05:35:53.506789 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 05:35:53.506803 | orchestrator | 2026-02-18 05:35:53.506814 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 05:35:53.506824 | orchestrator | Wednesday 18 February 2026 05:35:40 +0000 (0:00:01.725) 0:00:01.725 **** 2026-02-18 05:35:53.506835 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:35:53.506846 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:35:53.506856 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:35:53.506865 | orchestrator | 2026-02-18 05:35:53.506876 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 05:35:53.506911 | orchestrator | Wednesday 18 February 2026 05:35:42 +0000 (0:00:02.299) 0:00:04.024 **** 2026-02-18 05:35:53.506922 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-18 05:35:53.506931 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-18 05:35:53.506937 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-18 05:35:53.506944 | orchestrator 
| 2026-02-18 05:35:53.506950 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-18 05:35:53.506957 | orchestrator | 2026-02-18 05:35:53.506964 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-18 05:35:53.506970 | orchestrator | Wednesday 18 February 2026 05:35:45 +0000 (0:00:03.088) 0:00:07.112 **** 2026-02-18 05:35:53.506977 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:35:53.506984 | orchestrator | 2026-02-18 05:35:53.507000 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-18 05:35:53.507007 | orchestrator | Wednesday 18 February 2026 05:35:47 +0000 (0:00:02.576) 0:00:09.689 **** 2026-02-18 05:35:53.507016 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:35:53.507059 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:35:53.507067 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:35:53.507074 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 05:35:53.507096 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 05:35:53.507104 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 05:35:53.507110 | orchestrator | 2026-02-18 05:35:53.507117 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-18 05:35:53.507123 | orchestrator | Wednesday 18 February 2026 05:35:50 +0000 (0:00:02.342) 0:00:12.032 **** 2026-02-18 05:35:53.507130 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:35:53.507146 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:35:53.507153 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:35:53.507160 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 05:35:53.507170 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557309 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557424 | orchestrator | 2026-02-18 05:36:00.557443 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-18 05:36:00.557479 | orchestrator | Wednesday 18 February 2026 05:35:53 +0000 (0:00:03.172) 0:00:15.204 **** 2026-02-18 05:36:00.557493 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557506 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557559 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557581 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557601 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557645 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557677 | orchestrator | 2026-02-18 05:36:00.557697 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-18 05:36:00.557715 | orchestrator | Wednesday 18 February 2026 05:35:57 +0000 (0:00:03.844) 0:00:19.049 **** 2026-02-18 05:36:00.557736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-18 05:36:00.557837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-02-18 05:36:28.558458 | orchestrator | 2026-02-18 05:36:28.558576 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-02-18 05:36:28.558597 | orchestrator | Wednesday 18 February 2026 05:36:00 +0000 (0:00:03.217) 0:00:22.267 **** 2026-02-18 05:36:28.558613 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 05:36:28.558627 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:36:28.558640 | orchestrator | } 2026-02-18 05:36:28.558652 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:36:28.558665 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:36:28.558677 | orchestrator | } 2026-02-18 05:36:28.558689 | orchestrator | changed: [testbed-node-2] => { 2026-02-18 05:36:28.558700 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:36:28.558712 | orchestrator | } 2026-02-18 05:36:28.558725 | orchestrator | 2026-02-18 05:36:28.558738 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-18 05:36:28.558750 | orchestrator | Wednesday 18 February 2026 05:36:02 +0000 (0:00:01.581) 0:00:23.848 **** 2026-02-18 05:36:28.558764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-18 05:36:28.558796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-18 05:36:28.558810 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:36:28.558823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-18 05:36:28.558835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-18 05:36:28.558868 | orchestrator | 
skipping: [testbed-node-1] 2026-02-18 05:36:28.558880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-18 05:36:28.558912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-18 05:36:28.558924 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:36:28.558935 | orchestrator | 2026-02-18 05:36:28.558946 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-18 05:36:28.558958 | orchestrator | Wednesday 18 February 2026 05:36:04 +0000 (0:00:02.008) 0:00:25.857 **** 2026-02-18 05:36:28.558969 | orchestrator | 2026-02-18 05:36:28.558980 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-18 05:36:28.558991 | orchestrator | Wednesday 18 February 2026 05:36:04 +0000 
(0:00:00.466) 0:00:26.324 **** 2026-02-18 05:36:28.559002 | orchestrator | 2026-02-18 05:36:28.559012 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-18 05:36:28.559023 | orchestrator | Wednesday 18 February 2026 05:36:05 +0000 (0:00:00.485) 0:00:26.809 **** 2026-02-18 05:36:28.559034 | orchestrator | 2026-02-18 05:36:28.559045 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-18 05:36:28.559056 | orchestrator | Wednesday 18 February 2026 05:36:05 +0000 (0:00:00.808) 0:00:27.617 **** 2026-02-18 05:36:28.559067 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:36:28.559078 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:36:28.559089 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:36:28.559100 | orchestrator | 2026-02-18 05:36:28.559111 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-18 05:36:28.559127 | orchestrator | Wednesday 18 February 2026 05:36:16 +0000 (0:00:11.037) 0:00:38.655 **** 2026-02-18 05:36:28.559138 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:36:28.559149 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:36:28.559160 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:36:28.559203 | orchestrator | 2026-02-18 05:36:28.559215 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:36:28.559227 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 05:36:28.559240 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 05:36:28.559251 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 05:36:28.559262 | orchestrator | 2026-02-18 05:36:28.559273 | orchestrator | 2026-02-18 05:36:28.559284 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 05:36:28.559295 | orchestrator | Wednesday 18 February 2026 05:36:28 +0000 (0:00:11.175) 0:00:49.830 **** 2026-02-18 05:36:28.559315 | orchestrator | =============================================================================== 2026-02-18 05:36:28.559325 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.18s 2026-02-18 05:36:28.559336 | orchestrator | redis : Restart redis container ---------------------------------------- 11.04s 2026-02-18 05:36:28.559347 | orchestrator | redis : Copying over redis config files --------------------------------- 3.84s 2026-02-18 05:36:28.559358 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.22s 2026-02-18 05:36:28.559368 | orchestrator | redis : Copying over default config.json files -------------------------- 3.17s 2026-02-18 05:36:28.559379 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.09s 2026-02-18 05:36:28.559390 | orchestrator | redis : include_tasks --------------------------------------------------- 2.58s 2026-02-18 05:36:28.559401 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.34s 2026-02-18 05:36:28.559412 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.30s 2026-02-18 05:36:28.559422 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.01s 2026-02-18 05:36:28.559433 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.76s 2026-02-18 05:36:28.559444 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.58s 2026-02-18 05:36:28.934740 | orchestrator | + osism apply -a upgrade mariadb 2026-02-18 05:36:31.024458 | orchestrator | 2026-02-18 05:36:31 | INFO  | Task 
4481cb22-7e42-4ad8-853a-19b32e9b16df (mariadb) was prepared for execution. 2026-02-18 05:36:31.024561 | orchestrator | 2026-02-18 05:36:31 | INFO  | It takes a moment until task 4481cb22-7e42-4ad8-853a-19b32e9b16df (mariadb) has been started and output is visible here. 2026-02-18 05:36:56.543875 | orchestrator | 2026-02-18 05:36:56.543992 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 05:36:56.544010 | orchestrator | 2026-02-18 05:36:56.544023 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 05:36:56.544034 | orchestrator | Wednesday 18 February 2026 05:36:36 +0000 (0:00:01.397) 0:00:01.397 **** 2026-02-18 05:36:56.544046 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:36:56.544058 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:36:56.544070 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:36:56.544080 | orchestrator | 2026-02-18 05:36:56.544092 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 05:36:56.544103 | orchestrator | Wednesday 18 February 2026 05:36:38 +0000 (0:00:01.881) 0:00:03.279 **** 2026-02-18 05:36:56.544114 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-18 05:36:56.544125 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-18 05:36:56.544136 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-18 05:36:56.544209 | orchestrator | 2026-02-18 05:36:56.544222 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-18 05:36:56.544233 | orchestrator | 2026-02-18 05:36:56.544244 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-18 05:36:56.544255 | orchestrator | Wednesday 18 February 2026 05:36:40 +0000 (0:00:01.873) 0:00:05.152 **** 2026-02-18 05:36:56.544267 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:36:56.544278 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-18 05:36:56.544289 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-18 05:36:56.544300 | orchestrator | 2026-02-18 05:36:56.544311 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-18 05:36:56.544322 | orchestrator | Wednesday 18 February 2026 05:36:42 +0000 (0:00:01.550) 0:00:06.702 **** 2026-02-18 05:36:56.544334 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:36:56.544355 | orchestrator | 2026-02-18 05:36:56.544374 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-18 05:36:56.544426 | orchestrator | Wednesday 18 February 2026 05:36:43 +0000 (0:00:01.723) 0:00:08.426 **** 2026-02-18 05:36:56.544471 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' 
server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 05:36:56.544527 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 05:36:56.544560 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 05:36:56.544594 | orchestrator | 2026-02-18 05:36:56.544614 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-18 05:36:56.544632 | orchestrator | Wednesday 18 February 2026 05:36:48 +0000 (0:00:04.201) 0:00:12.628 **** 2026-02-18 05:36:56.544650 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:36:56.544670 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:36:56.544687 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:36:56.544705 | orchestrator | 2026-02-18 05:36:56.544723 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-18 05:36:56.544742 | orchestrator | Wednesday 18 February 2026 05:36:49 +0000 (0:00:01.679) 0:00:14.307 **** 2026-02-18 05:36:56.544761 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:36:56.544779 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:36:56.544797 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:36:56.544814 | orchestrator | 2026-02-18 05:36:56.544832 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-18 05:36:56.544850 | orchestrator | Wednesday 18 February 2026 05:36:52 +0000 (0:00:02.214) 0:00:16.522 **** 2026-02-18 05:36:56.544883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 05:37:09.139421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 05:37:09.139533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 05:37:09.139568 | orchestrator | 2026-02-18 05:37:09.139582 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-18 05:37:09.139593 | orchestrator | Wednesday 18 February 2026 05:36:56 +0000 (0:00:04.513) 0:00:21.036 **** 2026-02-18 05:37:09.139603 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:37:09.139614 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:37:09.139624 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:37:09.139634 | 
orchestrator | 2026-02-18 05:37:09.139644 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-18 05:37:09.139672 | orchestrator | Wednesday 18 February 2026 05:36:58 +0000 (0:00:02.132) 0:00:23.169 **** 2026-02-18 05:37:09.139682 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:37:09.139692 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:37:09.139701 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:37:09.139711 | orchestrator | 2026-02-18 05:37:09.139721 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-18 05:37:09.139731 | orchestrator | Wednesday 18 February 2026 05:37:03 +0000 (0:00:04.986) 0:00:28.155 **** 2026-02-18 05:37:09.139741 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:37:09.139751 | orchestrator | 2026-02-18 05:37:09.139766 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-18 05:37:09.139776 | orchestrator | Wednesday 18 February 2026 05:37:05 +0000 (0:00:02.005) 0:00:30.160 **** 2026-02-18 05:37:09.139786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:37:09.139797 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:37:09.139815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:37:16.968892 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:37:16.969013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:37:16.969036 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:37:16.969049 | orchestrator | 2026-02-18 05:37:16.969061 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-18 05:37:16.969073 | orchestrator | Wednesday 18 February 2026 05:37:09 +0000 (0:00:03.469) 0:00:33.630 **** 2026-02-18 05:37:16.969086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:37:16.969119 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:37:16.969216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:37:16.969232 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:37:16.969244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:37:16.969265 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:37:16.969276 | orchestrator | 2026-02-18 05:37:16.969287 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-18 05:37:16.969298 | orchestrator | Wednesday 18 February 2026 05:37:12 +0000 (0:00:03.553) 0:00:37.184 **** 2026-02-18 05:37:16.969324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:37:21.406740 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:37:21.406868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:37:21.406911 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:37:21.406939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-18 05:37:21.406951 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:37:21.406961 | orchestrator | 2026-02-18 05:37:21.406972 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-02-18 05:37:21.406983 | orchestrator | Wednesday 18 February 2026 05:37:16 +0000 (0:00:04.279) 0:00:41.463 **** 2026-02-18 05:37:21.407013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 05:37:21.407038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 05:37:21.407060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-18 05:37:37.328944 | orchestrator | 2026-02-18 05:37:37.329054 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-02-18 05:37:37.329068 | orchestrator | Wednesday 18 February 2026 05:37:21 +0000 (0:00:04.439) 0:00:45.902 **** 2026-02-18 05:37:37.329080 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 05:37:37.329091 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:37:37.329101 | orchestrator | } 2026-02-18 05:37:37.329112 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:37:37.329174 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:37:37.329185 | orchestrator | } 2026-02-18 05:37:37.329195 | orchestrator | 
changed: [testbed-node-2] => {
2026-02-18 05:37:37.329205 | orchestrator |  "msg": "Notifying handlers"
2026-02-18 05:37:37.329215 | orchestrator | }
2026-02-18 05:37:37.329225 | orchestrator |
2026-02-18 05:37:37.329235 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-18 05:37:37.329245 | orchestrator | Wednesday 18 February 2026 05:37:22 +0000 (0:00:01.463) 0:00:47.366 ****
2026-02-18 05:37:37.329275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-18 05:37:37.329289 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:37.329318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-18 05:37:37.329349 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:37.329365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-18 05:37:37.329376 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:37.329386 | orchestrator |
2026-02-18 05:37:37.329396 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-02-18 05:37:37.329406 | orchestrator | Wednesday 18 February 2026 05:37:26 +0000 (0:00:04.072) 0:00:51.439 ****
2026-02-18 05:37:37.329416 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:37.329432 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:37.329442 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:37.329452 | orchestrator |
2026-02-18 05:37:37.329461 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-02-18 05:37:37.329471 | orchestrator | Wednesday 18 February 2026 05:37:28 +0000 (0:00:01.808) 0:00:53.248 ****
2026-02-18 05:37:37.329481 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:37.329490 | orchestrator |
2026-02-18 05:37:37.329500 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-02-18 05:37:37.329510 | orchestrator | Wednesday 18 February 2026 05:37:29 +0000 (0:00:01.171) 0:00:54.419 ****
2026-02-18 05:37:37.329519 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:37.329529 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:37.329539 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:37.329548 | orchestrator |
2026-02-18 05:37:37.329558 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-02-18 05:37:37.329568 | orchestrator | Wednesday 18 February 2026 05:37:31 +0000 (0:00:01.424) 0:00:55.843 ****
2026-02-18 05:37:37.329577 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:37.329587 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:37.329597 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:37.329607 | orchestrator |
2026-02-18 05:37:37.329617 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-02-18 05:37:37.329627 | orchestrator | Wednesday 18 February 2026 05:37:32 +0000 (0:00:01.608) 0:00:57.452 ****
2026-02-18 05:37:37.329636 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:37.329646 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:37.329656 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:37.329665 | orchestrator |
2026-02-18 05:37:37.329675 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-02-18 05:37:37.329685 | orchestrator | Wednesday 18 February 2026 05:37:34 +0000 (0:00:01.542) 0:00:58.995 ****
2026-02-18 05:37:37.329694 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:37.329704 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:37.329714 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:37.329723 | orchestrator |
2026-02-18 05:37:37.329733 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-02-18 05:37:37.329743 | orchestrator | Wednesday 18 February 2026 05:37:35 +0000 (0:00:01.433) 0:01:00.429 ****
2026-02-18 05:37:37.329753 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:37.329762 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:37.329772 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:37.329782 | orchestrator |
2026-02-18 05:37:37.329798 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-02-18 05:37:55.540857 | orchestrator | Wednesday 18 February 2026 05:37:37 +0000 (0:00:01.390) 0:01:01.819 ****
2026-02-18 05:37:55.540970 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:55.540987 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:55.540999 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:55.541011 | orchestrator |
2026-02-18 05:37:55.541023 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-02-18 05:37:55.541034 | orchestrator | Wednesday 18 February 2026 05:37:39 +0000 (0:00:01.765) 0:01:03.584 ****
2026-02-18 05:37:55.541045 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 05:37:55.541057 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 05:37:55.541068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 05:37:55.541079 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:55.541090 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-18 05:37:55.541101 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-18 05:37:55.541155 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-18 05:37:55.541167 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:55.541206 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-18 05:37:55.541218 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-18 05:37:55.541228 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-18 05:37:55.541239 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:55.541250 | orchestrator |
2026-02-18 05:37:55.541261 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-02-18 05:37:55.541272 | orchestrator | Wednesday 18 February 2026 05:37:40 +0000 (0:00:01.427) 0:01:05.012 ****
2026-02-18 05:37:55.541283 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:55.541294 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:55.541305 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:55.541315 | orchestrator |
2026-02-18 05:37:55.541326 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-02-18 05:37:55.541352 | orchestrator | Wednesday 18 February 2026 05:37:42 +0000 (0:00:01.587) 0:01:06.599 ****
2026-02-18 05:37:55.541364 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:55.541375 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:55.541386 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:55.541397 | orchestrator |
2026-02-18 05:37:55.541408 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-02-18 05:37:55.541420 | orchestrator | Wednesday 18 February 2026 05:37:43 +0000 (0:00:01.358) 0:01:07.958 ****
2026-02-18 05:37:55.541431 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:55.541442 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:55.541453 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:55.541463 | orchestrator |
2026-02-18 05:37:55.541474 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-02-18 05:37:55.541486 | orchestrator | Wednesday 18 February 2026 05:37:44 +0000 (0:00:01.398) 0:01:09.356 ****
2026-02-18 05:37:55.541497 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:55.541508 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:55.541519 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:55.541530 | orchestrator |
2026-02-18 05:37:55.541541 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-02-18 05:37:55.541551 | orchestrator | Wednesday 18 February 2026 05:37:46 +0000 (0:00:01.361) 0:01:10.718 ****
2026-02-18 05:37:55.541562 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:55.541573 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:55.541584 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:55.541595 | orchestrator |
2026-02-18 05:37:55.541606 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-02-18 05:37:55.541616 | orchestrator | Wednesday 18 February 2026 05:37:47 +0000 (0:00:01.448) 0:01:12.167 ****
2026-02-18 05:37:55.541627 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:55.541638 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:55.541649 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:55.541660 | orchestrator |
2026-02-18 05:37:55.541671 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-02-18 05:37:55.541682 | orchestrator | Wednesday 18 February 2026 05:37:49 +0000 (0:00:01.585) 0:01:13.752 ****
2026-02-18 05:37:55.541693 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:55.541704 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:55.541715 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:55.541725 | orchestrator |
2026-02-18 05:37:55.541736 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************
2026-02-18 05:37:55.541747 | orchestrator | Wednesday 18 February 2026 05:37:50 +0000 (0:00:01.437) 0:01:15.190 ****
2026-02-18 05:37:55.541759 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:55.541770 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:55.541781 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:37:55.541791 | orchestrator |
2026-02-18 05:37:55.541802 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] ****************************
2026-02-18 05:37:55.541821 | orchestrator | Wednesday 18 February 2026 05:37:52 +0000 (0:00:01.328) 0:01:16.519 ****
2026-02-18 05:37:55.541857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-18 05:37:55.541872 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:37:55.541890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-18 05:37:55.541903 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:37:55.541923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-18 05:38:13.171000 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:38:13.171160 | orchestrator |
2026-02-18 05:38:13.171180 | orchestrator | TASK [mariadb : Wait for slave MariaDB] ****************************************
2026-02-18 05:38:13.171194 | orchestrator | Wednesday 18 February 2026 05:37:55 +0000 (0:00:03.509) 0:01:20.029 ****
2026-02-18 05:38:13.171206 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:38:13.171217 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:38:13.171228 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:38:13.171239 | orchestrator |
2026-02-18 05:38:13.171251 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] ***************************
2026-02-18 05:38:13.171262 | orchestrator | Wednesday 18 February 2026 05:37:57 +0000 (0:00:01.653) 0:01:21.683 ****
2026-02-18 05:38:13.171294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-18 05:38:13.171330 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:38:13.171361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-18 05:38:13.171375 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:38:13.171393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-18 05:38:13.171413 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:38:13.171424 | orchestrator |
2026-02-18 05:38:13.171435 | orchestrator | TASK [mariadb : Wait for master mariadb] ***************************************
2026-02-18 05:38:13.171446 | orchestrator | Wednesday 18 February 2026 05:38:00 +0000 (0:00:03.706) 0:01:25.389 ****
2026-02-18 05:38:13.171457 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:38:13.171468 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:38:13.171478 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:38:13.171489 | orchestrator |
2026-02-18 05:38:13.171500 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-02-18 05:38:13.171511 | orchestrator | Wednesday 18 February 2026 05:38:02 +0000 (0:00:01.406) 0:01:27.144 ****
2026-02-18 05:38:13.171522 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:38:13.171533 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:38:13.171545 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:38:13.171557 | orchestrator |
2026-02-18 05:38:13.171569 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-02-18 05:38:13.171582 | orchestrator | Wednesday 18 February 2026 05:38:04 +0000 (0:00:01.571) 0:01:28.551 ****
2026-02-18 05:38:13.171600 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:38:13.171619 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:38:13.171637 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:38:13.171656 | orchestrator |
2026-02-18 05:38:13.171675 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-02-18 05:38:13.171694 | orchestrator | Wednesday 18 February 2026 05:38:05 +0000 (0:00:01.571) 0:01:30.123 ****
2026-02-18 05:38:13.171713 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:38:13.171731 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:38:13.171744 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:38:13.171755 | orchestrator |
2026-02-18 05:38:13.171766 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-18 05:38:13.171777 | orchestrator | Wednesday 18 February 2026 05:38:07 +0000 (0:00:01.781) 0:01:31.905 ****
2026-02-18 05:38:13.171787 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:38:13.171798 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:38:13.171809 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:38:13.171819 | orchestrator |
2026-02-18 05:38:13.171830 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-18 05:38:13.171840 | orchestrator | Wednesday 18 February 2026 05:38:09 +0000 (0:00:01.976) 0:01:33.881 ****
2026-02-18 05:38:13.171851 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:38:13.171863 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:38:13.171873 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:38:13.171884 | orchestrator |
2026-02-18 05:38:13.171894 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-18 05:38:13.171905 | orchestrator | Wednesday 18 February 2026 05:38:11 +0000 (0:00:01.961) 0:01:35.843 ****
2026-02-18 05:38:13.171915 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:38:13.171926 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:38:13.171937 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:38:13.171947 | orchestrator |
2026-02-18 05:38:13.171958 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-18 05:38:13.171968 | orchestrator | Wednesday 18 February 2026 05:38:12 +0000 (0:00:01.568) 0:01:37.411 ****
2026-02-18 05:38:13.171988 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:40:52.592181 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:40:52.592295 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:40:52.592311 | orchestrator |
2026-02-18 05:40:52.592324 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-18 05:40:52.592338 | orchestrator | Wednesday 18 February 2026 05:38:14 +0000 (0:00:01.409) 0:01:38.821 ****
2026-02-18 05:40:52.592350 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:40:52.592361 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:40:52.592372 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:40:52.592409 | orchestrator |
2026-02-18 05:40:52.592421 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-18 05:40:52.592432 | orchestrator | Wednesday 18 February 2026 05:38:16 +0000 (0:00:02.151) 0:01:40.973 ****
2026-02-18 05:40:52.592444 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:40:52.592455 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:40:52.592465 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:40:52.592476 | orchestrator |
2026-02-18 05:40:52.592501 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-18 05:40:52.592512 | orchestrator | Wednesday 18 February 2026 05:38:17 +0000 (0:00:01.389) 0:01:42.362 ****
2026-02-18 05:40:52.592523 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:40:52.592535 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:40:52.592546 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:40:52.592557 | orchestrator |
2026-02-18 05:40:52.592568 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-18 05:40:52.592578 | orchestrator | Wednesday 18 February 2026 05:38:19 +0000 (0:00:01.432) 0:01:43.794 ****
2026-02-18 05:40:52.592589 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:40:52.592600 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:40:52.592611 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:40:52.592621 | orchestrator |
2026-02-18 05:40:52.592632 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-18 05:40:52.592643 | orchestrator | Wednesday 18 February 2026 05:38:22 +0000 (0:00:03.674) 0:01:47.469 ****
2026-02-18 05:40:52.592655 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:40:52.592667 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:40:52.592680 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:40:52.592691 | orchestrator |
2026-02-18 05:40:52.592704 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-18 05:40:52.592716 | orchestrator | Wednesday 18 February 2026 05:38:24 +0000 (0:00:01.385) 0:01:48.854 ****
2026-02-18 05:40:52.592728 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:40:52.592740 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:40:52.592752 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:40:52.592764 | orchestrator |
2026-02-18 05:40:52.592776 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-18 05:40:52.592789 | orchestrator | Wednesday 18 February 2026 05:38:25 +0000 (0:00:01.372) 0:01:50.227 ****
2026-02-18 05:40:52.592801 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:40:52.592814 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:40:52.592827 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:40:52.592839 | orchestrator |
2026-02-18 05:40:52.592851 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-18 05:40:52.592863 | orchestrator | Wednesday 18 February 2026 05:38:27 +0000 (0:00:01.748) 0:01:51.976 ****
2026-02-18 05:40:52.592875 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:40:52.592888 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:40:52.592900 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:40:52.592912 | orchestrator |
2026-02-18 05:40:52.592924 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-18 05:40:52.592936 | orchestrator | Wednesday 18 February 2026 05:38:29 +0000 (0:00:01.623) 0:01:53.599 ****
2026-02-18 05:40:52.592948 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:40:52.592961 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:40:52.592973 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:40:52.592986 | orchestrator |
2026-02-18 05:40:52.592998 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-18 05:40:52.593039 | orchestrator | Wednesday 18 February 2026 05:38:30 +0000 (0:00:01.732) 0:01:55.331 ****
2026-02-18 05:40:52.593050 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:40:52.593061 | orchestrator | changed: [testbed-node-1]
2026-02-18 05:40:52.593071 | orchestrator | changed: [testbed-node-2]
2026-02-18 05:40:52.593082 | orchestrator |
2026-02-18 05:40:52.593093 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-18 05:40:52.593113 | orchestrator | Wednesday 18 February 2026 05:38:32 +0000 (0:00:01.686) 0:01:57.018 ****
2026-02-18 05:40:52.593124 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:40:52.593135 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:40:52.593146 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:40:52.593156 | orchestrator |
2026-02-18 05:40:52.593167 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-18 05:40:52.593178 | orchestrator |
2026-02-18 05:40:52.593188 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-18 05:40:52.593199 | orchestrator | Wednesday 18 February 2026 05:38:34 +0000 (0:00:02.042) 0:01:59.061 ****
2026-02-18 05:40:52.593210 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:40:52.593220 | orchestrator |
2026-02-18 05:40:52.593245 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-18 05:40:52.593256 | orchestrator | Wednesday 18 February 2026 05:39:01 +0000 (0:00:27.002) 0:02:26.064 ****
2026-02-18 05:40:52.593281 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:40:52.593300 | orchestrator |
2026-02-18 05:40:52.593318 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-18 05:40:52.593337 | orchestrator | Wednesday 18 February 2026 05:39:06 +0000 (0:00:04.691) 0:02:30.756 ****
2026-02-18 05:40:52.593354 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:40:52.593372 | orchestrator |
2026-02-18 05:40:52.593391 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-18 05:40:52.593408 | orchestrator |
2026-02-18 05:40:52.593426 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-18 05:40:52.593441 | orchestrator | Wednesday 18 February 2026 05:39:09 +0000 (0:00:02.896) 0:02:33.652 ****
2026-02-18 05:40:52.593457 | orchestrator | changed: [testbed-node-1]
2026-02-18 05:40:52.593472 | orchestrator |
2026-02-18 05:40:52.593487 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-18 05:40:52.593527 | orchestrator | Wednesday 18 February 2026 05:39:35 +0000 (0:00:26.754) 0:03:00.407 ****
2026-02-18 05:40:52.593545 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:40:52.593564 | orchestrator |
2026-02-18 05:40:52.593580 | orchestrator | TASK [mariadb : Wait for MariaDB
service to sync WSREP] ************************ 2026-02-18 05:40:52.593596 | orchestrator | Wednesday 18 February 2026 05:39:41 +0000 (0:00:05.503) 0:03:05.910 **** 2026-02-18 05:40:52.593614 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:40:52.593630 | orchestrator | 2026-02-18 05:40:52.593646 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-18 05:40:52.593663 | orchestrator | 2026-02-18 05:40:52.593680 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-18 05:40:52.593699 | orchestrator | Wednesday 18 February 2026 05:39:44 +0000 (0:00:02.874) 0:03:08.785 **** 2026-02-18 05:40:52.593717 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:40:52.593734 | orchestrator | 2026-02-18 05:40:52.593762 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-18 05:40:52.593781 | orchestrator | Wednesday 18 February 2026 05:40:10 +0000 (0:00:25.967) 0:03:34.753 **** 2026-02-18 05:40:52.593799 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left). 
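The "Wait for MariaDB service port liveness" task above retried once on testbed-node-2 ("10 retries left") before succeeding. That pattern — probe a TCP port, sleep, retry up to a fixed count — can be sketched in a few lines of Python. This is an illustrative sketch, not the kolla-ansible implementation; the hostname and port in the commented usage line are assumptions.

```python
import socket
import time

def wait_for_port(host, port, retries=10, delay=2.0, timeout=5.0):
    """Retry a TCP connect until the port accepts connections,
    mirroring the retry behaviour of the port-liveness task above.
    Returns the attempt number that succeeded; raises OSError if
    all retries are exhausted."""
    for attempt in range(1, retries + 1):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return attempt  # port is live
        except OSError:
            if attempt == retries:
                raise
            time.sleep(delay)

# wait_for_port("testbed-node-2", 3306)  # host/port illustrative only
```

In the log above the first probe failed (the container was still starting after the restart) and the second succeeded, which is exactly the `attempt == 2` path here.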
2026-02-18 05:40:52.593811 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:40:52.593822 | orchestrator | 2026-02-18 05:40:52.593833 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-18 05:40:52.593843 | orchestrator | Wednesday 18 February 2026 05:40:18 +0000 (0:00:08.084) 0:03:42.837 **** 2026-02-18 05:40:52.593854 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-18 05:40:52.593865 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-18 05:40:52.593876 | orchestrator | mariadb_bootstrap_restart 2026-02-18 05:40:52.593886 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:40:52.593897 | orchestrator | 2026-02-18 05:40:52.593908 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-18 05:40:52.593929 | orchestrator | skipping: no hosts matched 2026-02-18 05:40:52.593940 | orchestrator | 2026-02-18 05:40:52.593951 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-18 05:40:52.593962 | orchestrator | skipping: no hosts matched 2026-02-18 05:40:52.593972 | orchestrator | 2026-02-18 05:40:52.593983 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-18 05:40:52.593994 | orchestrator | 2026-02-18 05:40:52.594103 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-18 05:40:52.594117 | orchestrator | Wednesday 18 February 2026 05:40:22 +0000 (0:00:04.113) 0:03:46.951 **** 2026-02-18 05:40:52.594127 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:40:52.594138 | orchestrator | 2026-02-18 05:40:52.594149 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-18 05:40:52.594168 | orchestrator | Wednesday 18 February 2026 
05:40:24 +0000 (0:00:01.946) 0:03:48.897 **** 2026-02-18 05:40:52.594179 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:40:52.594190 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:40:52.594201 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:40:52.594212 | orchestrator | 2026-02-18 05:40:52.594222 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-18 05:40:52.594233 | orchestrator | Wednesday 18 February 2026 05:40:27 +0000 (0:00:03.049) 0:03:51.947 **** 2026-02-18 05:40:52.594244 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:40:52.594255 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:40:52.594266 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:40:52.594276 | orchestrator | 2026-02-18 05:40:52.594287 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-18 05:40:52.594298 | orchestrator | Wednesday 18 February 2026 05:40:30 +0000 (0:00:03.301) 0:03:55.249 **** 2026-02-18 05:40:52.594308 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:40:52.594319 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:40:52.594330 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:40:52.594340 | orchestrator | 2026-02-18 05:40:52.594351 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-18 05:40:52.594362 | orchestrator | Wednesday 18 February 2026 05:40:33 +0000 (0:00:03.053) 0:03:58.302 **** 2026-02-18 05:40:52.594373 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:40:52.594384 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:40:52.594394 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:40:52.594405 | orchestrator | 2026-02-18 05:40:52.594416 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-18 05:40:52.594427 | orchestrator | Wednesday 18 February 2026 05:40:37 +0000 
(0:00:03.342) 0:04:01.645 **** 2026-02-18 05:40:52.594437 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:40:52.594448 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:40:52.594459 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:40:52.594469 | orchestrator | 2026-02-18 05:40:52.594480 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-18 05:40:52.594491 | orchestrator | Wednesday 18 February 2026 05:40:43 +0000 (0:00:06.715) 0:04:08.360 **** 2026-02-18 05:40:52.594502 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:40:52.594513 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:40:52.594523 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:40:52.594534 | orchestrator | 2026-02-18 05:40:52.594545 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-18 05:40:52.594555 | orchestrator | Wednesday 18 February 2026 05:40:47 +0000 (0:00:03.623) 0:04:11.984 **** 2026-02-18 05:40:52.594566 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:40:52.594577 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:40:52.594588 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:40:52.594598 | orchestrator | 2026-02-18 05:40:52.594609 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-18 05:40:52.594630 | orchestrator | Wednesday 18 February 2026 05:40:49 +0000 (0:00:01.639) 0:04:13.623 **** 2026-02-18 05:40:52.594641 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:40:52.594652 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:40:52.594663 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:40:52.594673 | orchestrator | 2026-02-18 05:40:52.594684 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-18 05:40:52.594706 | orchestrator | Wednesday 18 February 2026 05:40:52 +0000 (0:00:03.459) 0:04:17.083 **** 
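The "Extract MariaDB service WSREP sync status" and "Divide hosts by their MariaDB service WSREP sync status" tasks earlier in this run inspect Galera's `wsrep_local_state_comment` variable and split the inventory into synced and unsynced hosts (the "Fail when MariaDB services are not synced" task then skips because all three nodes were synced). A minimal Python sketch of that logic, assuming tab-separated `SHOW STATUS LIKE 'wsrep_local_state_comment'` client output — the exact column layout may differ by client:

```python
def parse_wsrep_state(show_status_output):
    """Extract the wsrep_local_state_comment value from tab-separated
    SHOW STATUS output (format assumed, see lead-in)."""
    for line in show_status_output.splitlines():
        name, _, value = line.partition("\t")
        if name == "wsrep_local_state_comment":
            return value.strip()
    return None

def divide_by_sync(states):
    """Partition hosts into synced / unsynced, mirroring the
    'Divide hosts by their MariaDB service WSREP sync status' task.
    `states` maps hostname -> wsrep_local_state_comment value."""
    synced = [h for h, s in states.items() if s == "Synced"]
    unsynced = [h for h, s in states.items() if s != "Synced"]
    return synced, unsynced
```

With all three testbed nodes reporting `Synced`, `unsynced` is empty, which corresponds to the skipped failure task in the log.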
2026-02-18 05:41:12.586125 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:41:12.586243 | orchestrator | 2026-02-18 05:41:12.586261 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-02-18 05:41:12.586274 | orchestrator | Wednesday 18 February 2026 05:40:54 +0000 (0:00:02.044) 0:04:19.128 **** 2026-02-18 05:41:12.586286 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:41:12.586299 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:41:12.586310 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:41:12.586321 | orchestrator | 2026-02-18 05:41:12.586333 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:41:12.586363 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-18 05:41:12.586377 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-18 05:41:12.586388 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-18 05:41:12.586399 | orchestrator | 2026-02-18 05:41:12.586411 | orchestrator | 2026-02-18 05:41:12.586422 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 05:41:12.586433 | orchestrator | Wednesday 18 February 2026 05:41:12 +0000 (0:00:17.475) 0:04:36.604 **** 2026-02-18 05:41:12.586444 | orchestrator | =============================================================================== 2026-02-18 05:41:12.586455 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 79.73s 2026-02-18 05:41:12.586466 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 18.28s 2026-02-18 05:41:12.586477 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 
17.48s 2026-02-18 05:41:12.586488 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 9.89s 2026-02-18 05:41:12.586499 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.72s 2026-02-18 05:41:12.586510 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.99s 2026-02-18 05:41:12.586521 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.51s 2026-02-18 05:41:12.586532 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.44s 2026-02-18 05:41:12.586543 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.28s 2026-02-18 05:41:12.586554 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.20s 2026-02-18 05:41:12.586567 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.07s 2026-02-18 05:41:12.586580 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.71s 2026-02-18 05:41:12.586592 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.67s 2026-02-18 05:41:12.586605 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.62s 2026-02-18 05:41:12.586618 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.55s 2026-02-18 05:41:12.586631 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.51s 2026-02-18 05:41:12.586644 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.47s 2026-02-18 05:41:12.586685 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.46s 2026-02-18 05:41:12.586698 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.34s 
2026-02-18 05:41:12.586711 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.30s 2026-02-18 05:41:12.893882 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-18 05:41:15.054604 | orchestrator | 2026-02-18 05:41:15 | INFO  | Task dad195dd-bead-4df9-befe-bcbb93d3a9d2 (rabbitmq) was prepared for execution. 2026-02-18 05:41:15.054709 | orchestrator | 2026-02-18 05:41:15 | INFO  | It takes a moment until task dad195dd-bead-4df9-befe-bcbb93d3a9d2 (rabbitmq) has been started and output is visible here. 2026-02-18 05:41:45.368699 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-18 05:41:45.368833 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-18 05:41:45.368864 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-18 05:41:45.368876 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-18 05:41:45.369757 | orchestrator | 2026-02-18 05:41:45.369789 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 05:41:45.369810 | orchestrator | 2026-02-18 05:41:45.369829 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 05:41:45.369848 | orchestrator | Wednesday 18 February 2026 05:41:20 +0000 (0:00:01.391) 0:00:01.391 **** 2026-02-18 05:41:45.369865 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:41:45.369877 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:41:45.369888 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:41:45.369899 | orchestrator | 2026-02-18 05:41:45.369911 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 05:41:45.369922 | orchestrator | Wednesday 18 February 2026 05:41:21 +0000 (0:00:00.932) 0:00:02.324 **** 2026-02-18 05:41:45.369934 | orchestrator | ok: [testbed-node-0] => 
(item=enable_rabbitmq_True) 2026-02-18 05:41:45.369945 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-18 05:41:45.369956 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-18 05:41:45.369967 | orchestrator | 2026-02-18 05:41:45.370076 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-18 05:41:45.370090 | orchestrator | 2026-02-18 05:41:45.370101 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-18 05:41:45.370112 | orchestrator | Wednesday 18 February 2026 05:41:22 +0000 (0:00:01.257) 0:00:03.581 **** 2026-02-18 05:41:45.370141 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:41:45.370154 | orchestrator | 2026-02-18 05:41:45.370165 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-18 05:41:45.370176 | orchestrator | Wednesday 18 February 2026 05:41:24 +0000 (0:00:01.115) 0:00:04.696 **** 2026-02-18 05:41:45.370187 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:41:45.370198 | orchestrator | 2026-02-18 05:41:45.370209 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-18 05:41:45.370220 | orchestrator | Wednesday 18 February 2026 05:41:25 +0000 (0:00:01.339) 0:00:06.036 **** 2026-02-18 05:41:45.370231 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:41:45.370243 | orchestrator | 2026-02-18 05:41:45.370254 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-18 05:41:45.370265 | orchestrator | Wednesday 18 February 2026 05:41:27 +0000 (0:00:02.206) 0:00:08.243 **** 2026-02-18 05:41:45.370277 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:41:45.370288 | orchestrator | 2026-02-18 05:41:45.370299 | orchestrator | TASK [rabbitmq : Check if 
running RabbitMQ is at most one version behind] ****** 2026-02-18 05:41:45.370336 | orchestrator | Wednesday 18 February 2026 05:41:36 +0000 (0:00:09.020) 0:00:17.263 **** 2026-02-18 05:41:45.370348 | orchestrator | ok: [testbed-node-0] => { 2026-02-18 05:41:45.370359 | orchestrator |  "changed": false, 2026-02-18 05:41:45.370369 | orchestrator |  "msg": "All assertions passed" 2026-02-18 05:41:45.370381 | orchestrator | } 2026-02-18 05:41:45.370392 | orchestrator | 2026-02-18 05:41:45.370404 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-18 05:41:45.370415 | orchestrator | Wednesday 18 February 2026 05:41:37 +0000 (0:00:00.356) 0:00:17.619 **** 2026-02-18 05:41:45.370426 | orchestrator | ok: [testbed-node-0] => { 2026-02-18 05:41:45.370437 | orchestrator |  "changed": false, 2026-02-18 05:41:45.370448 | orchestrator |  "msg": "All assertions passed" 2026-02-18 05:41:45.370459 | orchestrator | } 2026-02-18 05:41:45.370470 | orchestrator | 2026-02-18 05:41:45.370481 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-18 05:41:45.370492 | orchestrator | Wednesday 18 February 2026 05:41:37 +0000 (0:00:00.706) 0:00:18.325 **** 2026-02-18 05:41:45.370503 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:41:45.370514 | orchestrator | 2026-02-18 05:41:45.370525 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-18 05:41:45.370536 | orchestrator | Wednesday 18 February 2026 05:41:38 +0000 (0:00:00.965) 0:00:19.290 **** 2026-02-18 05:41:45.370547 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:41:45.370558 | orchestrator | 2026-02-18 05:41:45.370568 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-18 05:41:45.370579 | orchestrator | Wednesday 18 February 
2026 05:41:39 +0000 (0:00:01.167) 0:00:20.458 **** 2026-02-18 05:41:45.370590 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:41:45.370601 | orchestrator | 2026-02-18 05:41:45.370612 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-18 05:41:45.370622 | orchestrator | Wednesday 18 February 2026 05:41:41 +0000 (0:00:02.115) 0:00:22.573 **** 2026-02-18 05:41:45.370633 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:41:45.370644 | orchestrator | 2026-02-18 05:41:45.370655 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-18 05:41:45.370666 | orchestrator | Wednesday 18 February 2026 05:41:43 +0000 (0:00:01.147) 0:00:23.721 **** 2026-02-18 05:41:45.370704 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:41:45.370726 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 
'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:41:45.370748 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': 
{'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:41:45.370760 | orchestrator | 2026-02-18 05:41:45.370772 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-18 05:41:45.370783 | orchestrator | Wednesday 18 February 2026 05:41:43 +0000 (0:00:00.787) 0:00:24.509 **** 2026-02-18 05:41:45.370802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:41:56.885393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:41:56.885701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:41:56.885743 | orchestrator | 2026-02-18 05:41:56.885769 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-18 05:41:56.885792 | orchestrator | Wednesday 18 February 2026 05:41:45 +0000 (0:00:01.435) 0:00:25.945 **** 2026-02-18 
05:41:56.885815 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-18 05:41:56.885838 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-18 05:41:56.885860 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-18 05:41:56.885882 | orchestrator | 2026-02-18 05:41:56.885905 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-18 05:41:56.885929 | orchestrator | Wednesday 18 February 2026 05:41:46 +0000 (0:00:01.372) 0:00:27.317 **** 2026-02-18 05:41:56.885949 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-18 05:41:56.885997 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-18 05:41:56.886017 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-18 05:41:56.886036 | orchestrator | 2026-02-18 05:41:56.886055 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-18 05:41:56.886074 | orchestrator | Wednesday 18 February 2026 05:41:48 +0000 (0:00:02.074) 0:00:29.392 **** 2026-02-18 05:41:56.886175 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-18 05:41:56.886197 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-18 05:41:56.886216 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-18 05:41:56.886239 | orchestrator | 2026-02-18 05:41:56.886260 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-18 05:41:56.886280 | orchestrator | Wednesday 18 February 2026 05:41:50 +0000 (0:00:01.367) 
0:00:30.759 **** 2026-02-18 05:41:56.886299 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-18 05:41:56.886318 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-18 05:41:56.886336 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-18 05:41:56.886355 | orchestrator | 2026-02-18 05:41:56.886375 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-18 05:41:56.886422 | orchestrator | Wednesday 18 February 2026 05:41:51 +0000 (0:00:01.335) 0:00:32.095 **** 2026-02-18 05:41:56.886442 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-18 05:41:56.886461 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-18 05:41:56.886498 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-18 05:41:56.886516 | orchestrator | 2026-02-18 05:41:56.886535 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-18 05:41:56.886554 | orchestrator | Wednesday 18 February 2026 05:41:52 +0000 (0:00:01.372) 0:00:33.467 **** 2026-02-18 05:41:56.886573 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-18 05:41:56.886591 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-18 05:41:56.886610 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-18 05:41:56.886629 | orchestrator | 2026-02-18 05:41:56.886648 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-18 05:41:56.886666 | orchestrator | Wednesday 18 February 2026 05:41:54 +0000 
(0:00:01.539) 0:00:35.007 **** 2026-02-18 05:41:56.886685 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:41:56.886705 | orchestrator | 2026-02-18 05:41:56.886723 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-18 05:41:56.886742 | orchestrator | Wednesday 18 February 2026 05:41:55 +0000 (0:00:01.002) 0:00:36.010 **** 2026-02-18 05:41:56.886774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:41:56.886798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:41:56.886835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:42:02.115362 | orchestrator | 2026-02-18 05:42:02.115471 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-18 
05:42:02.115488 | orchestrator | Wednesday 18 February 2026 05:41:56 +0000 (0:00:01.433) 0:00:37.443 **** 2026-02-18 05:42:02.115521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:42:02.115539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:42:02.115553 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:42:02.115566 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:42:02.115578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:42:02.115611 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:42:02.115622 | orchestrator | 2026-02-18 05:42:02.115634 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-18 05:42:02.115645 | orchestrator | Wednesday 18 February 2026 05:41:57 +0000 (0:00:00.456) 0:00:37.900 **** 2026-02-18 05:42:02.115676 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:42:02.115689 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:42:02.115706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:42:02.115718 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:42:02.115730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:42:02.115741 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:42:02.115760 | orchestrator | 2026-02-18 05:42:02.115771 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-18 05:42:02.115782 | orchestrator | Wednesday 18 February 2026 05:41:58 +0000 (0:00:01.030) 0:00:38.931 **** 2026-02-18 05:42:02.115793 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:42:02.115804 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:42:02.115815 | orchestrator | ok: [testbed-node-2] 2026-02-18 
05:42:02.115826 | orchestrator | 2026-02-18 05:42:02.115837 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-18 05:42:02.115847 | orchestrator | Wednesday 18 February 2026 05:42:00 +0000 (0:00:02.513) 0:00:41.445 **** 2026-02-18 05:42:02.115867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:42:55.193878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:42:55.194081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-18 05:42:55.194098 | orchestrator | 2026-02-18 05:42:55.194107 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-18 05:42:55.194142 | orchestrator | Wednesday 18 February 2026 05:42:02 +0000 (0:00:01.250) 0:00:42.695 **** 2026-02-18 05:42:55.194151 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 
05:42:55.194161 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:42:55.194169 | orchestrator | } 2026-02-18 05:42:55.194177 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:42:55.194185 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:42:55.194191 | orchestrator | } 2026-02-18 05:42:55.194199 | orchestrator | changed: [testbed-node-2] => { 2026-02-18 05:42:55.194206 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:42:55.194214 | orchestrator | } 2026-02-18 05:42:55.194221 | orchestrator | 2026-02-18 05:42:55.194229 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-18 05:42:55.194236 | orchestrator | Wednesday 18 February 2026 05:42:02 +0000 (0:00:00.422) 0:00:43.118 **** 2026-02-18 05:42:55.194329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:42:55.194366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 
'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:42:55.194376 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-18 05:42:55.194384 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-18 05:42:55.194400 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:42:55.194408 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:42:55.194416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-18 05:42:55.194433 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:42:55.194441 | orchestrator | 2026-02-18 05:42:55.194449 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-18 05:42:55.194458 | orchestrator | Wednesday 18 February 2026 05:42:03 +0000 (0:00:01.264) 0:00:44.382 **** 2026-02-18 05:42:55.194466 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:42:55.194473 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:42:55.194482 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:42:55.194490 | orchestrator | 2026-02-18 05:42:55.194499 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-18 05:42:55.194507 | orchestrator | 2026-02-18 05:42:55.194515 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-18 05:42:55.194523 | orchestrator | Wednesday 18 February 2026 05:42:04 +0000 (0:00:00.956) 0:00:45.338 **** 2026-02-18 05:42:55.194531 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:42:55.194540 | orchestrator | 2026-02-18 05:42:55.194548 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-18 05:42:55.194556 | orchestrator | Wednesday 18 February 2026 05:42:05 +0000 (0:00:01.127) 0:00:46.466 **** 2026-02-18 05:42:55.194562 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:42:55.194569 | orchestrator | 2026-02-18 
05:42:55.194577 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-18 05:42:55.194586 | orchestrator | Wednesday 18 February 2026 05:42:14 +0000 (0:00:08.458) 0:00:54.925 **** 2026-02-18 05:42:55.194594 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:42:55.194602 | orchestrator | 2026-02-18 05:42:55.194609 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-18 05:42:55.194617 | orchestrator | Wednesday 18 February 2026 05:42:22 +0000 (0:00:08.055) 0:01:02.981 **** 2026-02-18 05:42:55.194623 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:42:55.194631 | orchestrator | 2026-02-18 05:42:55.194638 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-18 05:42:55.194645 | orchestrator | 2026-02-18 05:42:55.194653 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-18 05:42:55.194660 | orchestrator | Wednesday 18 February 2026 05:42:32 +0000 (0:00:09.853) 0:01:12.835 **** 2026-02-18 05:42:55.194668 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:42:55.194676 | orchestrator | 2026-02-18 05:42:55.194683 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-18 05:42:55.194689 | orchestrator | Wednesday 18 February 2026 05:42:33 +0000 (0:00:01.114) 0:01:13.949 **** 2026-02-18 05:42:55.194695 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:42:55.194701 | orchestrator | 2026-02-18 05:42:55.194707 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-18 05:42:55.194713 | orchestrator | Wednesday 18 February 2026 05:42:41 +0000 (0:00:08.228) 0:01:22.178 **** 2026-02-18 05:42:55.194727 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:43:42.185695 | orchestrator | 2026-02-18 05:43:42.185834 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2026-02-18 05:43:42.185853 | orchestrator | Wednesday 18 February 2026 05:42:55 +0000 (0:00:13.592) 0:01:35.771 **** 2026-02-18 05:43:42.185867 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:43:42.185879 | orchestrator | 2026-02-18 05:43:42.185891 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-18 05:43:42.185999 | orchestrator | 2026-02-18 05:43:42.186065 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-18 05:43:42.186080 | orchestrator | Wednesday 18 February 2026 05:43:05 +0000 (0:00:10.462) 0:01:46.233 **** 2026-02-18 05:43:42.186091 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:43:42.186111 | orchestrator | 2026-02-18 05:43:42.186123 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-18 05:43:42.186134 | orchestrator | Wednesday 18 February 2026 05:43:06 +0000 (0:00:01.170) 0:01:47.404 **** 2026-02-18 05:43:42.186145 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:43:42.186155 | orchestrator | 2026-02-18 05:43:42.186166 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-18 05:43:42.186190 | orchestrator | Wednesday 18 February 2026 05:43:14 +0000 (0:00:07.882) 0:01:55.287 **** 2026-02-18 05:43:42.186202 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:43:42.186213 | orchestrator | 2026-02-18 05:43:42.186223 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-18 05:43:42.186234 | orchestrator | Wednesday 18 February 2026 05:43:27 +0000 (0:00:13.002) 0:02:08.290 **** 2026-02-18 05:43:42.186245 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:43:42.186255 | orchestrator | 2026-02-18 05:43:42.186266 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2026-02-18 05:43:42.186277 | orchestrator | 2026-02-18 05:43:42.186287 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-18 05:43:42.186298 | orchestrator | Wednesday 18 February 2026 05:43:37 +0000 (0:00:09.465) 0:02:17.755 **** 2026-02-18 05:43:42.186309 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:43:42.186319 | orchestrator | 2026-02-18 05:43:42.186330 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-18 05:43:42.186341 | orchestrator | Wednesday 18 February 2026 05:43:37 +0000 (0:00:00.546) 0:02:18.301 **** 2026-02-18 05:43:42.186352 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:43:42.186362 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:43:42.186373 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:43:42.186383 | orchestrator | 2026-02-18 05:43:42.186394 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:43:42.186406 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-18 05:43:42.186418 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-18 05:43:42.186429 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-18 05:43:42.186439 | orchestrator | 2026-02-18 05:43:42.186450 | orchestrator | 2026-02-18 05:43:42.186461 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 05:43:42.186472 | orchestrator | Wednesday 18 February 2026 05:43:41 +0000 (0:00:04.037) 0:02:22.339 **** 2026-02-18 05:43:42.186482 | orchestrator | =============================================================================== 2026-02-18 05:43:42.186493 | orchestrator | rabbitmq 
: Restart rabbitmq container ---------------------------------- 34.65s 2026-02-18 05:43:42.186504 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 29.78s 2026-02-18 05:43:42.186514 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 24.57s 2026-02-18 05:43:42.186525 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.02s 2026-02-18 05:43:42.186536 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.04s 2026-02-18 05:43:42.186547 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 3.41s 2026-02-18 05:43:42.186558 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.51s 2026-02-18 05:43:42.186577 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 2.21s 2026-02-18 05:43:42.186588 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.12s 2026-02-18 05:43:42.186667 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.07s 2026-02-18 05:43:42.186678 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.54s 2026-02-18 05:43:42.186689 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.44s 2026-02-18 05:43:42.186700 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.43s 2026-02-18 05:43:42.186711 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.37s 2026-02-18 05:43:42.186721 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.37s 2026-02-18 05:43:42.186733 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.37s 2026-02-18 05:43:42.186743 | orchestrator | rabbitmq : Get container 
facts ------------------------------------------ 1.34s 2026-02-18 05:43:42.186754 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.34s 2026-02-18 05:43:42.186765 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.26s 2026-02-18 05:43:42.186775 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.26s 2026-02-18 05:43:42.505678 | orchestrator | + osism apply -a upgrade openvswitch 2026-02-18 05:43:44.597350 | orchestrator | 2026-02-18 05:43:44 | INFO  | Task 48da8ba1-c35d-47fe-945f-9f2f5603ea9c (openvswitch) was prepared for execution. 2026-02-18 05:43:44.597449 | orchestrator | 2026-02-18 05:43:44 | INFO  | It takes a moment until task 48da8ba1-c35d-47fe-945f-9f2f5603ea9c (openvswitch) has been started and output is visible here. 2026-02-18 05:44:13.715182 | orchestrator | 2026-02-18 05:44:13.715285 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 05:44:13.715299 | orchestrator | 2026-02-18 05:44:13.715307 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 05:44:13.715316 | orchestrator | Wednesday 18 February 2026 05:43:51 +0000 (0:00:02.327) 0:00:02.328 **** 2026-02-18 05:44:13.715331 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:44:13.715346 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:44:13.715356 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:44:13.715365 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:44:13.715375 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:44:13.715382 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:44:13.715391 | orchestrator | 2026-02-18 05:44:13.715416 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 05:44:13.715425 | orchestrator | Wednesday 18 February 2026 05:43:54 +0000 (0:00:02.882) 0:00:05.210 **** 
2026-02-18 05:44:13.715434 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 05:44:13.715442 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 05:44:13.715450 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 05:44:13.715459 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 05:44:13.715466 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 05:44:13.715473 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-18 05:44:13.715480 | orchestrator | 2026-02-18 05:44:13.715488 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-18 05:44:13.715496 | orchestrator | 2026-02-18 05:44:13.715503 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-18 05:44:13.715511 | orchestrator | Wednesday 18 February 2026 05:43:56 +0000 (0:00:02.539) 0:00:07.750 **** 2026-02-18 05:44:13.715519 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 05:44:13.715550 | orchestrator | 2026-02-18 05:44:13.715558 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-18 05:44:13.715566 | orchestrator | Wednesday 18 February 2026 05:44:00 +0000 (0:00:03.683) 0:00:11.433 **** 2026-02-18 05:44:13.715573 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-18 05:44:13.715581 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-18 05:44:13.715588 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-18 05:44:13.715596 | orchestrator | ok: [testbed-node-3] => 
(item=openvswitch) 2026-02-18 05:44:13.715603 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-18 05:44:13.715611 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-18 05:44:13.715619 | orchestrator | 2026-02-18 05:44:13.715626 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-18 05:44:13.715633 | orchestrator | Wednesday 18 February 2026 05:44:02 +0000 (0:00:02.508) 0:00:13.942 **** 2026-02-18 05:44:13.715639 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-18 05:44:13.715646 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-18 05:44:13.715653 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-18 05:44:13.715660 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-18 05:44:13.715667 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-18 05:44:13.715674 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-18 05:44:13.715684 | orchestrator | 2026-02-18 05:44:13.715696 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-18 05:44:13.715704 | orchestrator | Wednesday 18 February 2026 05:44:05 +0000 (0:00:03.064) 0:00:17.007 **** 2026-02-18 05:44:13.715712 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-18 05:44:13.715720 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:44:13.715729 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-18 05:44:13.715738 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:44:13.715747 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-18 05:44:13.715755 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:44:13.715762 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-18 05:44:13.715771 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:44:13.715779 | orchestrator | 
skipping: [testbed-node-4] => (item=openvswitch)  2026-02-18 05:44:13.715788 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:44:13.715797 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-18 05:44:13.715805 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:44:13.715815 | orchestrator | 2026-02-18 05:44:13.715823 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-18 05:44:13.715832 | orchestrator | Wednesday 18 February 2026 05:44:08 +0000 (0:00:02.944) 0:00:19.951 **** 2026-02-18 05:44:13.715841 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:44:13.715850 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:44:13.715858 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:44:13.715867 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:44:13.715875 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:44:13.715884 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:44:13.715915 | orchestrator | 2026-02-18 05:44:13.715976 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-18 05:44:13.715987 | orchestrator | Wednesday 18 February 2026 05:44:11 +0000 (0:00:02.273) 0:00:22.225 **** 2026-02-18 05:44:13.716021 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:13.716054 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:13.716063 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:13.716072 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:13.716080 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:13.716088 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:13.716112 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073433 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073522 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073555 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073567 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073575 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073605 | orchestrator | 2026-02-18 05:44:16.073616 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-18 05:44:16.073626 | orchestrator | Wednesday 18 February 2026 05:44:13 +0000 (0:00:02.608) 0:00:24.834 **** 2026-02-18 05:44:16.073660 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073670 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073679 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073687 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073695 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073713 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:16.073729 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-02-18 05:44:21.775053 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:21.775164 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:21.775181 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:21.775193 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:21.775244 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:21.775258 | orchestrator | 2026-02-18 05:44:21.775272 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-18 05:44:21.775284 | orchestrator | Wednesday 18 February 2026 05:44:17 
+0000 (0:00:03.539) 0:00:28.373 **** 2026-02-18 05:44:21.775296 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:44:21.775308 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:44:21.775319 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:44:21.775330 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:44:21.775341 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:44:21.775352 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:44:21.775363 | orchestrator | 2026-02-18 05:44:21.775374 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-02-18 05:44:21.775401 | orchestrator | Wednesday 18 February 2026 05:44:19 +0000 (0:00:02.531) 0:00:30.905 **** 2026-02-18 05:44:21.775414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:21.775427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:21.775439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:21.775459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:21.775476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 
'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:21.775498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-18 05:44:25.728951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:25.729091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:25.729135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:25.729149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:25.729176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:25.729209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-18 05:44:25.729222 | orchestrator | 2026-02-18 05:44:25.729237 | orchestrator | TASK 
[service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-02-18 05:44:25.729249 | orchestrator | Wednesday 18 February 2026 05:44:23 +0000 (0:00:03.392) 0:00:34.297 **** 2026-02-18 05:44:25.729262 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 05:44:25.729277 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:44:25.729289 | orchestrator | } 2026-02-18 05:44:25.729302 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:44:25.729315 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:44:25.729326 | orchestrator | } 2026-02-18 05:44:25.729339 | orchestrator | changed: [testbed-node-2] => { 2026-02-18 05:44:25.729351 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:44:25.729363 | orchestrator | } 2026-02-18 05:44:25.729376 | orchestrator | changed: [testbed-node-3] => { 2026-02-18 05:44:25.729389 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:44:25.729408 | orchestrator | } 2026-02-18 05:44:25.729421 | orchestrator | changed: [testbed-node-4] => { 2026-02-18 05:44:25.729433 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:44:25.729445 | orchestrator | } 2026-02-18 05:44:25.729458 | orchestrator | changed: [testbed-node-5] => { 2026-02-18 05:44:25.729470 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:44:25.729482 | orchestrator | } 2026-02-18 05:44:25.729495 | orchestrator | 2026-02-18 05:44:25.729508 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-18 05:44:25.729519 | orchestrator | Wednesday 18 February 2026 05:44:25 +0000 (0:00:02.057) 0:00:36.355 **** 2026-02-18 05:44:25.729531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-18 05:44:25.729544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-18 05:44:25.729556 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:44:25.729573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-18 05:44:25.729586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-18 05:44:25.729604 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:44:56.966268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-18 05:44:56.966414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-18 05:44:56.966433 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:44:56.966448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-18 05:44:56.966509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  
2026-02-18 05:44:56.966522 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:44:56.966534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-18 05:44:56.966564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-18 05:44:56.966585 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:44:56.966597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-18 05:44:56.966608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-18 05:44:56.966619 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:44:56.966630 | orchestrator | 2026-02-18 05:44:56.966643 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-18 05:44:56.966656 | orchestrator | Wednesday 18 February 2026 05:44:27 +0000 (0:00:02.627) 0:00:38.982 **** 2026-02-18 05:44:56.966667 | orchestrator | 2026-02-18 05:44:56.966677 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-18 05:44:56.966688 | orchestrator | Wednesday 18 February 2026 05:44:28 +0000 (0:00:00.574) 0:00:39.557 **** 2026-02-18 05:44:56.966699 | orchestrator | 2026-02-18 05:44:56.966710 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-02-18 05:44:56.966720 | orchestrator | Wednesday 18 February 2026 05:44:28 +0000 (0:00:00.526) 0:00:40.084 **** 2026-02-18 05:44:56.966731 | orchestrator | 2026-02-18 05:44:56.966742 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-18 05:44:56.966753 | orchestrator | Wednesday 18 February 2026 05:44:29 +0000 (0:00:00.530) 0:00:40.615 **** 2026-02-18 05:44:56.966763 | orchestrator | 2026-02-18 05:44:56.966774 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-18 05:44:56.966785 | orchestrator | Wednesday 18 February 2026 05:44:30 +0000 (0:00:00.749) 0:00:41.364 **** 2026-02-18 05:44:56.966795 | orchestrator | 2026-02-18 05:44:56.966806 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-18 05:44:56.966818 | orchestrator | Wednesday 18 February 2026 05:44:30 +0000 (0:00:00.527) 0:00:41.892 **** 2026-02-18 05:44:56.966829 | orchestrator | 2026-02-18 05:44:56.966852 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-18 05:44:56.966949 | orchestrator | Wednesday 18 February 2026 05:44:31 +0000 (0:00:00.865) 0:00:42.757 **** 2026-02-18 05:44:56.966968 | orchestrator | changed: [testbed-node-5] 2026-02-18 05:44:56.966979 | orchestrator | changed: [testbed-node-4] 2026-02-18 05:44:56.966990 | orchestrator | changed: [testbed-node-3] 2026-02-18 05:44:56.967001 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:44:56.967011 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:44:56.967022 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:44:56.967054 | orchestrator | 2026-02-18 05:44:56.967065 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-18 05:44:56.967077 | orchestrator | Wednesday 18 February 2026 05:44:43 +0000 (0:00:11.745) 
0:00:54.503 **** 2026-02-18 05:44:56.967088 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:44:56.967099 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:44:56.967110 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:44:56.967121 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:44:56.967132 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:44:56.967142 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:44:56.967153 | orchestrator | 2026-02-18 05:44:56.967164 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-18 05:44:56.967175 | orchestrator | Wednesday 18 February 2026 05:44:45 +0000 (0:00:02.347) 0:00:56.850 **** 2026-02-18 05:44:56.967186 | orchestrator | changed: [testbed-node-5] 2026-02-18 05:44:56.967197 | orchestrator | changed: [testbed-node-4] 2026-02-18 05:44:56.967207 | orchestrator | changed: [testbed-node-3] 2026-02-18 05:44:56.967218 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:44:56.967229 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:44:56.967240 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:44:56.967251 | orchestrator | 2026-02-18 05:44:56.967262 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-18 05:44:56.967283 | orchestrator | Wednesday 18 February 2026 05:44:56 +0000 (0:00:11.235) 0:01:08.086 **** 2026-02-18 05:45:12.401846 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-18 05:45:12.402098 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-18 05:45:12.402120 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-18 05:45:12.402133 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-18 
05:45:12.402144 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-18 05:45:12.402155 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-18 05:45:12.402167 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-18 05:45:12.402178 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-18 05:45:12.402188 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-18 05:45:12.402199 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-18 05:45:12.402210 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-18 05:45:12.402221 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-18 05:45:12.402232 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 05:45:12.402243 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 05:45:12.402254 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 05:45:12.402265 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 05:45:12.402276 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 05:45:12.402287 | orchestrator | ok: [testbed-node-5] 
=> (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-18 05:45:12.402321 | orchestrator | 2026-02-18 05:45:12.402334 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-18 05:45:12.402346 | orchestrator | Wednesday 18 February 2026 05:45:04 +0000 (0:00:07.505) 0:01:15.592 **** 2026-02-18 05:45:12.402359 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-18 05:45:12.402370 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:45:12.402383 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-18 05:45:12.402394 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:45:12.402404 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-18 05:45:12.402415 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:45:12.402426 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-02-18 05:45:12.402437 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-02-18 05:45:12.402448 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-02-18 05:45:12.402459 | orchestrator | 2026-02-18 05:45:12.402470 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-18 05:45:12.402481 | orchestrator | Wednesday 18 February 2026 05:45:07 +0000 (0:00:03.291) 0:01:18.884 **** 2026-02-18 05:45:12.402507 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-18 05:45:12.402518 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:45:12.402529 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-18 05:45:12.402540 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:45:12.402551 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-18 05:45:12.402562 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:45:12.402572 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 
2026-02-18 05:45:12.402583 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-18 05:45:12.402594 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-18 05:45:12.402605 | orchestrator | 2026-02-18 05:45:12.402615 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:45:12.402628 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-18 05:45:12.402640 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-18 05:45:12.402651 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-18 05:45:12.402662 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 05:45:12.402693 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 05:45:12.402705 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 05:45:12.402716 | orchestrator | 2026-02-18 05:45:12.402727 | orchestrator | 2026-02-18 05:45:12.402738 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 05:45:12.402749 | orchestrator | Wednesday 18 February 2026 05:45:11 +0000 (0:00:04.157) 0:01:23.042 **** 2026-02-18 05:45:12.402760 | orchestrator | =============================================================================== 2026-02-18 05:45:12.402771 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.75s 2026-02-18 05:45:12.402781 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.24s 2026-02-18 05:45:12.402792 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.50s 2026-02-18 
05:45:12.402803 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.16s 2026-02-18 05:45:12.402821 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.77s 2026-02-18 05:45:12.402832 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.68s 2026-02-18 05:45:12.402843 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.54s 2026-02-18 05:45:12.402854 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.39s 2026-02-18 05:45:12.402896 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.29s 2026-02-18 05:45:12.402916 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.06s 2026-02-18 05:45:12.402933 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.94s 2026-02-18 05:45:12.402949 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.88s 2026-02-18 05:45:12.402966 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.63s 2026-02-18 05:45:12.402982 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.61s 2026-02-18 05:45:12.402997 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.54s 2026-02-18 05:45:12.403013 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.53s 2026-02-18 05:45:12.403030 | orchestrator | module-load : Load modules ---------------------------------------------- 2.51s 2026-02-18 05:45:12.403047 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.35s 2026-02-18 05:45:12.403066 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.27s 2026-02-18 05:45:12.403084 
| orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.06s 2026-02-18 05:45:12.751442 | orchestrator | + osism apply -a upgrade ovn 2026-02-18 05:45:14.889895 | orchestrator | 2026-02-18 05:45:14 | INFO  | Task b1e2b3b0-d662-4582-85aa-7b0fc2eed92c (ovn) was prepared for execution. 2026-02-18 05:45:14.890108 | orchestrator | 2026-02-18 05:45:14 | INFO  | It takes a moment until task b1e2b3b0-d662-4582-85aa-7b0fc2eed92c (ovn) has been started and output is visible here. 2026-02-18 05:45:37.498522 | orchestrator | 2026-02-18 05:45:37.498656 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-18 05:45:37.498675 | orchestrator | 2026-02-18 05:45:37.498688 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-18 05:45:37.498700 | orchestrator | Wednesday 18 February 2026 05:45:20 +0000 (0:00:01.374) 0:00:01.374 **** 2026-02-18 05:45:37.498711 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:45:37.498759 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:45:37.498773 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:45:37.498800 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:45:37.498811 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:45:37.498822 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:45:37.498833 | orchestrator | 2026-02-18 05:45:37.498844 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-18 05:45:37.498894 | orchestrator | Wednesday 18 February 2026 05:45:23 +0000 (0:00:03.245) 0:00:04.619 **** 2026-02-18 05:45:37.498906 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-18 05:45:37.498917 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-18 05:45:37.498928 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-18 05:45:37.498939 | orchestrator | ok: [testbed-node-3] => 
(item=enable_ovn_True) 2026-02-18 05:45:37.498950 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-18 05:45:37.498961 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-18 05:45:37.498972 | orchestrator | 2026-02-18 05:45:37.498983 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-18 05:45:37.498994 | orchestrator | 2026-02-18 05:45:37.499004 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-18 05:45:37.499016 | orchestrator | Wednesday 18 February 2026 05:45:28 +0000 (0:00:04.266) 0:00:08.885 **** 2026-02-18 05:45:37.499051 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 05:45:37.499067 | orchestrator | 2026-02-18 05:45:37.499080 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-18 05:45:37.499093 | orchestrator | Wednesday 18 February 2026 05:45:31 +0000 (0:00:03.033) 0:00:11.919 **** 2026-02-18 05:45:37.499108 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499123 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499135 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499146 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499157 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499187 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499199 | orchestrator | 2026-02-18 05:45:37.499210 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-18 05:45:37.499221 | orchestrator | Wednesday 18 February 2026 05:45:33 +0000 (0:00:02.429) 0:00:14.349 **** 2026-02-18 05:45:37.499238 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499250 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499270 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499281 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499292 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499303 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499314 | orchestrator | 2026-02-18 05:45:37.499325 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-18 05:45:37.499336 | orchestrator | Wednesday 18 February 2026 05:45:36 +0000 (0:00:02.962) 0:00:17.312 **** 2026-02-18 05:45:37.499346 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-18 05:45:37.499358 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:37.499376 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076503 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076640 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076656 | orchestrator | ok: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076670 | orchestrator | 2026-02-18 05:45:47.076683 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-18 05:45:47.076696 | orchestrator | Wednesday 18 February 2026 05:45:39 +0000 (0:00:02.640) 0:00:19.952 **** 2026-02-18 05:45:47.076707 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076719 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076730 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076741 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076753 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076787 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076808 | orchestrator | 2026-02-18 05:45:47.076820 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-02-18 05:45:47.076831 | orchestrator | Wednesday 18 February 2026 05:45:42 +0000 (0:00:03.156) 0:00:23.109 **** 2026-02-18 05:45:47.076896 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076958 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:45:47.076969 | orchestrator | 2026-02-18 05:45:47.076980 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-18 05:45:47.076992 | orchestrator | Wednesday 18 February 2026 05:45:44 +0000 (0:00:02.651) 0:00:25.760 **** 2026-02-18 05:45:47.077003 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 05:45:47.077016 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:45:47.077029 | orchestrator | } 2026-02-18 05:45:47.077041 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:45:47.077053 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:45:47.077066 | orchestrator | } 2026-02-18 05:45:47.077089 | orchestrator | changed: [testbed-node-2] => { 2026-02-18 05:45:47.077102 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:45:47.077114 | orchestrator | } 2026-02-18 05:45:47.077126 | orchestrator | changed: [testbed-node-3] => { 2026-02-18 05:45:47.077139 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:45:47.077151 | orchestrator | } 2026-02-18 05:45:47.077164 | orchestrator | changed: [testbed-node-4] => { 2026-02-18 
05:45:47.077176 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:45:47.077189 | orchestrator | } 2026-02-18 05:45:47.077202 | orchestrator | changed: [testbed-node-5] => { 2026-02-18 05:45:47.077214 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:45:47.077226 | orchestrator | } 2026-02-18 05:45:47.077239 | orchestrator | 2026-02-18 05:45:47.077252 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-18 05:45:47.077264 | orchestrator | Wednesday 18 February 2026 05:45:46 +0000 (0:00:02.040) 0:00:27.801 **** 2026-02-18 05:45:47.077293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:46:16.541116 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:46:16.541238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:46:16.541260 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:46:16.541273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:46:16.541285 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:46:16.541296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:46:16.541307 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:46:16.541318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:46:16.541330 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:46:16.541341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:46:16.541379 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:46:16.541391 | orchestrator | 2026-02-18 
05:46:16.541404 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-18 05:46:16.541417 | orchestrator | Wednesday 18 February 2026 05:45:49 +0000 (0:00:02.647) 0:00:30.449 **** 2026-02-18 05:46:16.541428 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:46:16.541440 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:46:16.541451 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:46:16.541462 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:46:16.541472 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:46:16.541483 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:46:16.541493 | orchestrator | 2026-02-18 05:46:16.541504 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-18 05:46:16.541515 | orchestrator | Wednesday 18 February 2026 05:45:53 +0000 (0:00:03.680) 0:00:34.130 **** 2026-02-18 05:46:16.541526 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-18 05:46:16.541537 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-18 05:46:16.541548 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-18 05:46:16.541559 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-18 05:46:16.541569 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-18 05:46:16.541580 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-18 05:46:16.541590 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 05:46:16.541601 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 05:46:16.541611 | orchestrator | ok: [testbed-node-5] => (item={'name': 
'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 05:46:16.541637 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 05:46:16.541649 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 05:46:16.541679 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-18 05:46:16.541693 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-18 05:46:16.541707 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-18 05:46:16.541720 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-18 05:46:16.541732 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-18 05:46:16.541745 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-18 05:46:16.541757 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-18 05:46:16.541770 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 05:46:16.541782 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 05:46:16.541795 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 05:46:16.541808 | orchestrator | ok: [testbed-node-3] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 05:46:16.541869 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 05:46:16.541884 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-18 05:46:16.541897 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-18 05:46:16.541909 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-18 05:46:16.541920 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-18 05:46:16.541933 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-18 05:46:16.541945 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-18 05:46:16.541957 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-18 05:46:16.541968 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 05:46:16.541981 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 05:46:16.541994 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 05:46:16.542005 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 05:46:16.542075 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 05:46:16.542089 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-18 05:46:16.542100 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-18 05:46:16.542111 | orchestrator 
| ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-18 05:46:16.542122 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-18 05:46:16.542132 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-18 05:46:16.542143 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-18 05:46:16.542162 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-18 05:46:16.542175 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-18 05:46:16.542194 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-18 05:46:16.542205 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-18 05:46:16.542215 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-18 05:46:16.542232 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-18 05:46:16.542251 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-18 05:49:05.839670 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-18 05:49:05.839822 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-18 05:49:05.839839 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-18 05:49:05.839850 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-18 05:49:05.839883 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-18 05:49:05.839893 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-18 05:49:05.839903 | orchestrator | 2026-02-18 05:49:05.839914 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-18 05:49:05.839925 | orchestrator | Wednesday 18 February 2026 05:46:13 +0000 (0:00:20.175) 0:00:54.305 **** 2026-02-18 05:49:05.839935 | orchestrator | 2026-02-18 05:49:05.839944 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-18 05:49:05.839954 | orchestrator | Wednesday 18 February 2026 05:46:13 +0000 (0:00:00.454) 0:00:54.760 **** 2026-02-18 05:49:05.839964 | orchestrator | 2026-02-18 05:49:05.839977 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-18 05:49:05.839993 | orchestrator | Wednesday 18 February 2026 05:46:14 +0000 (0:00:00.468) 0:00:55.228 **** 2026-02-18 05:49:05.840010 | orchestrator | 2026-02-18 05:49:05.840026 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-18 05:49:05.840041 | orchestrator | Wednesday 18 February 2026 05:46:14 +0000 (0:00:00.459) 0:00:55.688 **** 2026-02-18 05:49:05.840057 | orchestrator | 2026-02-18 05:49:05.840072 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-18 05:49:05.840087 | orchestrator | Wednesday 18 February 2026 05:46:15 +0000 (0:00:00.428) 0:00:56.116 **** 2026-02-18 05:49:05.840101 | orchestrator | 2026-02-18 05:49:05.840115 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-18 05:49:05.840128 | orchestrator | Wednesday 18 February 2026 05:46:15 +0000 (0:00:00.426) 0:00:56.543 **** 2026-02-18 05:49:05.840145 | orchestrator | 2026-02-18 05:49:05.840161 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-18 05:49:05.840178 | orchestrator | Wednesday 18 February 2026 05:46:16 +0000 (0:00:00.790) 0:00:57.333 **** 2026-02-18 05:49:05.840194 | orchestrator | 2026-02-18 05:49:05.840213 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-02-18 05:49:05.840233 | orchestrator | changed: [testbed-node-4] 2026-02-18 05:49:05.840247 | orchestrator | changed: [testbed-node-3] 2026-02-18 05:49:05.840259 | orchestrator | changed: [testbed-node-5] 2026-02-18 05:49:05.840270 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:49:05.840282 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:49:05.840293 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:49:05.840304 | orchestrator | 2026-02-18 05:49:05.840315 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-18 05:49:05.840326 | orchestrator | 2026-02-18 05:49:05.840338 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-18 05:49:05.840349 | orchestrator | Wednesday 18 February 2026 05:48:28 +0000 (0:02:12.160) 0:03:09.493 **** 2026-02-18 05:49:05.840361 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:49:05.840372 | orchestrator | 2026-02-18 05:49:05.840383 | 
orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-18 05:49:05.840394 | orchestrator | Wednesday 18 February 2026 05:48:30 +0000 (0:00:02.047) 0:03:11.541 **** 2026-02-18 05:49:05.840406 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-18 05:49:05.840418 | orchestrator | 2026-02-18 05:49:05.840429 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-18 05:49:05.840440 | orchestrator | Wednesday 18 February 2026 05:48:32 +0000 (0:00:01.954) 0:03:13.495 **** 2026-02-18 05:49:05.840451 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.840463 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.840474 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.840485 | orchestrator | 2026-02-18 05:49:05.840505 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-18 05:49:05.840517 | orchestrator | Wednesday 18 February 2026 05:48:34 +0000 (0:00:01.879) 0:03:15.374 **** 2026-02-18 05:49:05.840528 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.840539 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.840550 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.840562 | orchestrator | 2026-02-18 05:49:05.840573 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-18 05:49:05.840582 | orchestrator | Wednesday 18 February 2026 05:48:36 +0000 (0:00:01.539) 0:03:16.914 **** 2026-02-18 05:49:05.840593 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.840602 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.840612 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.840621 | orchestrator | 2026-02-18 05:49:05.840631 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-18 
05:49:05.840641 | orchestrator | Wednesday 18 February 2026 05:48:37 +0000 (0:00:01.459) 0:03:18.373 **** 2026-02-18 05:49:05.840650 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.840674 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.840684 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.840693 | orchestrator | 2026-02-18 05:49:05.840753 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-18 05:49:05.840765 | orchestrator | Wednesday 18 February 2026 05:48:39 +0000 (0:00:01.670) 0:03:20.044 **** 2026-02-18 05:49:05.840775 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.840803 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.840813 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.840823 | orchestrator | 2026-02-18 05:49:05.840833 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-18 05:49:05.840843 | orchestrator | Wednesday 18 February 2026 05:48:40 +0000 (0:00:01.416) 0:03:21.461 **** 2026-02-18 05:49:05.840852 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:49:05.840863 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:49:05.840873 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:49:05.840883 | orchestrator | 2026-02-18 05:49:05.840892 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-18 05:49:05.840902 | orchestrator | Wednesday 18 February 2026 05:48:42 +0000 (0:00:01.445) 0:03:22.906 **** 2026-02-18 05:49:05.840912 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.840922 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.840932 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.840941 | orchestrator | 2026-02-18 05:49:05.840951 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-18 05:49:05.840961 | orchestrator | Wednesday 18 February 
2026 05:48:43 +0000 (0:00:01.791) 0:03:24.698 **** 2026-02-18 05:49:05.840972 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.840989 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.841006 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.841021 | orchestrator | 2026-02-18 05:49:05.841036 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-18 05:49:05.841052 | orchestrator | Wednesday 18 February 2026 05:48:45 +0000 (0:00:01.741) 0:03:26.440 **** 2026-02-18 05:49:05.841067 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.841082 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.841097 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.841113 | orchestrator | 2026-02-18 05:49:05.841128 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-18 05:49:05.841146 | orchestrator | Wednesday 18 February 2026 05:48:47 +0000 (0:00:01.830) 0:03:28.271 **** 2026-02-18 05:49:05.841162 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.841178 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.841188 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.841198 | orchestrator | 2026-02-18 05:49:05.841207 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-18 05:49:05.841217 | orchestrator | Wednesday 18 February 2026 05:48:48 +0000 (0:00:01.407) 0:03:29.679 **** 2026-02-18 05:49:05.841236 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:49:05.841246 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:49:05.841256 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:49:05.841266 | orchestrator | 2026-02-18 05:49:05.841275 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-18 05:49:05.841285 | orchestrator | Wednesday 18 February 2026 05:48:50 +0000 (0:00:01.466) 0:03:31.145 **** 
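The port-liveness tasks above verify that each OVN NB/SB database endpoint accepts TCP connections, and the earlier "Configure OVN in OVSDB" task writes a comma-separated `ovn-remote` string built from the same hosts. A minimal sketch of both ideas, assuming the host/port values shown in this log (192.168.16.10-12, ports 6641/6642); the helper names are illustrative, not from kolla-ansible itself:

```python
# Hedged sketch: roughly what the "Check OVN NB/SB service port liveness"
# tasks verify, plus the ovn-remote string format seen in this log.
# Hosts and ports mirror the log output; function names are hypothetical.
import socket

OVN_NB_PORT = 6641  # northbound DB, per the OVN_NB_DB value in the log
OVN_SB_PORT = 6642  # southbound DB, per the OVN_SB_DB value in the log
DB_HOSTS = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]


def port_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def build_remote(hosts: list[str], port: int) -> str:
    """Build the comma-separated connection string used for ovn-remote
    and OVN_NB_DB/OVN_SB_DB in the log, e.g. tcp:H1:P,tcp:H2:P,..."""
    return ",".join(f"tcp:{h}:{port}" for h in hosts)


if __name__ == "__main__":
    print(build_remote(DB_HOSTS, OVN_NB_PORT))
    for host in DB_HOSTS:
        print(host, port_alive(host, OVN_SB_PORT))
```

Note the `ovn-remote` entries in the log use port 16641 (the SB relay/proxy side configured on the chassis), while the DB server containers listen on 6641/6642; which port a chassis points at is deployment-specific.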
2026-02-18 05:49:05.841300 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:49:05.841317 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:49:05.841332 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:49:05.841348 | orchestrator | 2026-02-18 05:49:05.841361 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-18 05:49:05.841377 | orchestrator | Wednesday 18 February 2026 05:48:51 +0000 (0:00:01.373) 0:03:32.519 **** 2026-02-18 05:49:05.841393 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.841410 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.841427 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.841444 | orchestrator | 2026-02-18 05:49:05.841461 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-18 05:49:05.841477 | orchestrator | Wednesday 18 February 2026 05:48:53 +0000 (0:00:01.793) 0:03:34.312 **** 2026-02-18 05:49:05.841494 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.841509 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.841525 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.841541 | orchestrator | 2026-02-18 05:49:05.841556 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-18 05:49:05.841573 | orchestrator | Wednesday 18 February 2026 05:48:54 +0000 (0:00:01.415) 0:03:35.727 **** 2026-02-18 05:49:05.841589 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:49:05.841606 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.841623 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.841635 | orchestrator | 2026-02-18 05:49:05.841645 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-18 05:49:05.841655 | orchestrator | Wednesday 18 February 2026 05:48:57 +0000 (0:00:02.152) 0:03:37.880 **** 2026-02-18 05:49:05.841664 | orchestrator | ok: 
[testbed-node-0] 2026-02-18 05:49:05.841674 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:49:05.841690 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:49:05.841731 | orchestrator | 2026-02-18 05:49:05.841749 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-18 05:49:05.841765 | orchestrator | Wednesday 18 February 2026 05:48:58 +0000 (0:00:01.412) 0:03:39.292 **** 2026-02-18 05:49:05.841781 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:49:05.841799 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:49:05.841813 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:49:05.841824 | orchestrator | 2026-02-18 05:49:05.841833 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-18 05:49:05.841843 | orchestrator | Wednesday 18 February 2026 05:48:59 +0000 (0:00:01.406) 0:03:40.699 **** 2026-02-18 05:49:05.841853 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:49:05.841863 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:49:05.841873 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:49:05.841882 | orchestrator | 2026-02-18 05:49:05.841892 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-18 05:49:05.841902 | orchestrator | Wednesday 18 February 2026 05:49:01 +0000 (0:00:01.759) 0:03:42.458 **** 2026-02-18 05:49:05.841935 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208434 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208545 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208565 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208586 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208606 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208623 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:12.208756 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:12.208811 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:12.208834 | orchestrator | 2026-02-18 05:49:12.208848 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-18 05:49:12.208861 | orchestrator | Wednesday 18 February 2026 05:49:05 +0000 (0:00:04.218) 0:03:46.676 **** 2026-02-18 05:49:12.208873 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208885 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208897 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208923 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:12.208944 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:27.104949 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:27.105068 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:27.105084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:27.105098 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:27.105110 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:27.105121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:27.105175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:27.105189 | orchestrator | 2026-02-18 05:49:27.105203 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-02-18 05:49:27.105216 | orchestrator | Wednesday 18 February 2026 05:49:12 +0000 (0:00:06.371) 0:03:53.048 **** 2026-02-18 05:49:27.105228 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-02-18 05:49:27.105239 | orchestrator | 2026-02-18 05:49:27.105250 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay 
containers] ***** 2026-02-18 05:49:27.105261 | orchestrator | Wednesday 18 February 2026 05:49:14 +0000 (0:00:01.900) 0:03:54.948 **** 2026-02-18 05:49:27.105272 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:49:27.105284 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:49:27.105311 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:49:27.105322 | orchestrator | 2026-02-18 05:49:27.105333 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-02-18 05:49:27.105344 | orchestrator | Wednesday 18 February 2026 05:49:15 +0000 (0:00:01.755) 0:03:56.704 **** 2026-02-18 05:49:27.105355 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:49:27.105366 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:49:27.105377 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:49:27.105387 | orchestrator | 2026-02-18 05:49:27.105398 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-02-18 05:49:27.105409 | orchestrator | Wednesday 18 February 2026 05:49:18 +0000 (0:00:02.682) 0:03:59.386 **** 2026-02-18 05:49:27.105420 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:49:27.105431 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:49:27.105441 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:49:27.105452 | orchestrator | 2026-02-18 05:49:27.105463 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-02-18 05:49:27.105476 | orchestrator | Wednesday 18 February 2026 05:49:21 +0000 (0:00:02.970) 0:04:02.356 **** 2026-02-18 05:49:27.105490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:27.105504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:27.105517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:27.105538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-18 05:49:27.105557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:27.105570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:27.105590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:31.809345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:31.809454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:31.809472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:31.809511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:49:31.809524 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:31.809536 | orchestrator | 2026-02-18 05:49:31.809550 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-18 05:49:31.809563 | orchestrator | Wednesday 18 February 2026 05:49:27 +0000 (0:00:05.569) 0:04:07.926 **** 2026-02-18 05:49:31.809589 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 05:49:31.809602 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:49:31.809614 | orchestrator | } 2026-02-18 05:49:31.809625 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:49:31.809636 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:49:31.809648 | orchestrator | } 2026-02-18 05:49:31.809659 | orchestrator | changed: [testbed-node-2] => { 2026-02-18 05:49:31.809670 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:49:31.809681 | orchestrator | } 2026-02-18 05:49:31.809742 | orchestrator | 2026-02-18 05:49:31.809754 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-18 05:49:31.809765 | orchestrator | Wednesday 18 February 2026 05:49:28 +0000 (0:00:01.405) 0:04:09.332 **** 2026-02-18 05:49:31.809777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:31.809808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:31.809821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:31.809842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 
05:49:31.809853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:31.809864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:31.809883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:31.809896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:31.809909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-18 05:49:31.809931 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-18 05:51:01.558495 | orchestrator | 2026-02-18 05:51:01.558612 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-02-18 05:51:01.558630 | orchestrator | Wednesday 18 February 2026 05:49:31 +0000 (0:00:03.310) 0:04:12.643 **** 2026-02-18 05:51:01.558644 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-02-18 05:51:01.558656 | orchestrator | changed: [testbed-node-1] => (item=[1]) 
2026-02-18 05:51:01.558666 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-02-18 05:51:01.558677 | orchestrator | 2026-02-18 05:51:01.558689 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-18 05:51:01.558701 | orchestrator | Wednesday 18 February 2026 05:49:33 +0000 (0:00:02.185) 0:04:14.828 **** 2026-02-18 05:51:01.558712 | orchestrator | changed: [testbed-node-0] => { 2026-02-18 05:51:01.558724 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:51:01.558736 | orchestrator | } 2026-02-18 05:51:01.558747 | orchestrator | changed: [testbed-node-1] => { 2026-02-18 05:51:01.558758 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:51:01.558769 | orchestrator | } 2026-02-18 05:51:01.558783 | orchestrator | changed: [testbed-node-2] => { 2026-02-18 05:51:01.558802 | orchestrator |  "msg": "Notifying handlers" 2026-02-18 05:51:01.558821 | orchestrator | } 2026-02-18 05:51:01.558839 | orchestrator | 2026-02-18 05:51:01.558858 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-18 05:51:01.558876 | orchestrator | Wednesday 18 February 2026 05:49:35 +0000 (0:00:01.460) 0:04:16.289 **** 2026-02-18 05:51:01.558894 | orchestrator | 2026-02-18 05:51:01.558911 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-18 05:51:01.558929 | orchestrator | Wednesday 18 February 2026 05:49:35 +0000 (0:00:00.451) 0:04:16.741 **** 2026-02-18 05:51:01.558948 | orchestrator | 2026-02-18 05:51:01.558967 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-18 05:51:01.558985 | orchestrator | Wednesday 18 February 2026 05:49:36 +0000 (0:00:00.468) 0:04:17.209 **** 2026-02-18 05:51:01.559005 | orchestrator | 2026-02-18 05:51:01.559026 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-18 
05:51:01.559046 | orchestrator | Wednesday 18 February 2026 05:49:37 +0000 (0:00:01.011) 0:04:18.221 **** 2026-02-18 05:51:01.559098 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:51:01.559118 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:51:01.559159 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:51:01.559178 | orchestrator | 2026-02-18 05:51:01.559192 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-18 05:51:01.559204 | orchestrator | Wednesday 18 February 2026 05:49:54 +0000 (0:00:16.877) 0:04:35.098 **** 2026-02-18 05:51:01.559215 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:51:01.559226 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:51:01.559238 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:51:01.559248 | orchestrator | 2026-02-18 05:51:01.559260 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-02-18 05:51:01.559270 | orchestrator | Wednesday 18 February 2026 05:50:11 +0000 (0:00:16.753) 0:04:51.852 **** 2026-02-18 05:51:01.559281 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-02-18 05:51:01.559292 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-02-18 05:51:01.559303 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-02-18 05:51:01.559314 | orchestrator | 2026-02-18 05:51:01.559325 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-18 05:51:01.559353 | orchestrator | Wednesday 18 February 2026 05:50:23 +0000 (0:00:12.293) 0:05:04.145 **** 2026-02-18 05:51:01.559365 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:51:01.559376 | orchestrator | changed: [testbed-node-0] 2026-02-18 05:51:01.559387 | orchestrator | changed: [testbed-node-1] 2026-02-18 05:51:01.559398 | orchestrator | 2026-02-18 05:51:01.559409 | orchestrator | TASK [ovn-db : Wait for leader election] 
*************************************** 2026-02-18 05:51:01.559420 | orchestrator | Wednesday 18 February 2026 05:50:40 +0000 (0:00:17.487) 0:05:21.633 **** 2026-02-18 05:51:01.559456 | orchestrator | Pausing for 5 seconds 2026-02-18 05:51:01.559468 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:51:01.559479 | orchestrator | 2026-02-18 05:51:01.559490 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-18 05:51:01.559501 | orchestrator | Wednesday 18 February 2026 05:50:46 +0000 (0:00:06.188) 0:05:27.821 **** 2026-02-18 05:51:01.559512 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:51:01.559523 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:51:01.559533 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:51:01.559544 | orchestrator | 2026-02-18 05:51:01.559555 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-18 05:51:01.559565 | orchestrator | Wednesday 18 February 2026 05:50:48 +0000 (0:00:01.839) 0:05:29.661 **** 2026-02-18 05:51:01.559576 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:51:01.559587 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:51:01.559598 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:51:01.559609 | orchestrator | 2026-02-18 05:51:01.559620 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-18 05:51:01.559631 | orchestrator | Wednesday 18 February 2026 05:50:50 +0000 (0:00:01.967) 0:05:31.629 **** 2026-02-18 05:51:01.559641 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:51:01.559652 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:51:01.559663 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:51:01.559674 | orchestrator | 2026-02-18 05:51:01.559684 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-18 05:51:01.559695 | orchestrator | Wednesday 18 February 2026 05:50:52 
+0000 (0:00:01.865) 0:05:33.494 **** 2026-02-18 05:51:01.559705 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:51:01.559716 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:51:01.559727 | orchestrator | changed: [testbed-node-2] 2026-02-18 05:51:01.559738 | orchestrator | 2026-02-18 05:51:01.559748 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-18 05:51:01.559759 | orchestrator | Wednesday 18 February 2026 05:50:54 +0000 (0:00:01.832) 0:05:35.327 **** 2026-02-18 05:51:01.559770 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:51:01.559781 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:51:01.559791 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:51:01.559802 | orchestrator | 2026-02-18 05:51:01.559813 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-18 05:51:01.559843 | orchestrator | Wednesday 18 February 2026 05:50:56 +0000 (0:00:01.831) 0:05:37.159 **** 2026-02-18 05:51:01.559854 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:51:01.559865 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:51:01.559876 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:51:01.559886 | orchestrator | 2026-02-18 05:51:01.559897 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-02-18 05:51:01.559908 | orchestrator | Wednesday 18 February 2026 05:50:58 +0000 (0:00:01.854) 0:05:39.013 **** 2026-02-18 05:51:01.559918 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-02-18 05:51:01.559929 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-02-18 05:51:01.559940 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-02-18 05:51:01.559951 | orchestrator | 2026-02-18 05:51:01.559962 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 05:51:01.559974 | orchestrator | testbed-node-0 : ok=48  changed=15  unreachable=0 failed=0 
skipped=8  rescued=0 ignored=0 2026-02-18 05:51:01.559987 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-18 05:51:01.559997 | orchestrator | testbed-node-2 : ok=49  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-18 05:51:01.560008 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 05:51:01.560027 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 05:51:01.560038 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 05:51:01.560071 | orchestrator | 2026-02-18 05:51:01.560083 | orchestrator | 2026-02-18 05:51:01.560094 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 05:51:01.560105 | orchestrator | Wednesday 18 February 2026 05:51:01 +0000 (0:00:02.968) 0:05:41.982 **** 2026-02-18 05:51:01.560116 | orchestrator | =============================================================================== 2026-02-18 05:51:01.560127 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 132.16s 2026-02-18 05:51:01.560138 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.18s 2026-02-18 05:51:01.560148 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.49s 2026-02-18 05:51:01.560159 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 16.88s 2026-02-18 05:51:01.560170 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.75s 2026-02-18 05:51:01.560180 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 12.29s 2026-02-18 05:51:01.560197 | orchestrator | ovn-db : Copying over config.json files for services 
-------------------- 6.37s 2026-02-18 05:51:01.560208 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.19s 2026-02-18 05:51:01.560219 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.57s 2026-02-18 05:51:01.560229 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.27s 2026-02-18 05:51:01.560240 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.22s 2026-02-18 05:51:01.560251 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.68s 2026-02-18 05:51:01.560261 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.31s 2026-02-18 05:51:01.560272 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.25s 2026-02-18 05:51:01.560283 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.16s 2026-02-18 05:51:01.560293 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.03s 2026-02-18 05:51:01.560304 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.03s 2026-02-18 05:51:01.560315 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.97s 2026-02-18 05:51:01.560325 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.97s 2026-02-18 05:51:01.560336 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.96s 2026-02-18 05:51:01.881739 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-18 05:51:01.881838 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-18 05:51:01.881855 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-02-18 05:51:01.890097 | orchestrator | + set -e 2026-02-18 05:51:01.890145 | 
orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-18 05:51:01.890159 | orchestrator | ++ export INTERACTIVE=false 2026-02-18 05:51:01.890172 | orchestrator | ++ INTERACTIVE=false 2026-02-18 05:51:01.890183 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-18 05:51:01.890281 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-18 05:51:01.890304 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-02-18 05:51:03.965352 | orchestrator | 2026-02-18 05:51:03 | INFO  | Task 1111b19d-7893-4b06-8be8-2c601096278c (ceph-rolling_update) was prepared for execution. 2026-02-18 05:51:03.965485 | orchestrator | 2026-02-18 05:51:03 | INFO  | It takes a moment until task 1111b19d-7893-4b06-8be8-2c601096278c (ceph-rolling_update) has been started and output is visible here. 2026-02-18 05:52:26.431464 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-18 05:52:26.431681 | orchestrator | 2.16.14 2026-02-18 05:52:26.431714 | orchestrator | 2026-02-18 05:52:26.431733 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-02-18 05:52:26.431751 | orchestrator | 2026-02-18 05:52:26.431763 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-02-18 05:52:26.431774 | orchestrator | Wednesday 18 February 2026 05:51:12 +0000 (0:00:01.415) 0:00:01.415 **** 2026-02-18 05:52:26.431786 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-02-18 05:52:26.431798 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-02-18 05:52:26.431810 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-02-18 05:52:26.431821 | orchestrator | skipping: [localhost] 2026-02-18 05:52:26.431832 | orchestrator | 2026-02-18 05:52:26.431843 | orchestrator | PLAY [Gather facts and check the init system] 
********************************** 2026-02-18 05:52:26.431854 | orchestrator | 2026-02-18 05:52:26.431865 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-02-18 05:52:26.431875 | orchestrator | Wednesday 18 February 2026 05:51:14 +0000 (0:00:01.624) 0:00:03.039 **** 2026-02-18 05:52:26.431886 | orchestrator | ok: [testbed-node-0] => { 2026-02-18 05:52:26.431897 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-18 05:52:26.431908 | orchestrator | } 2026-02-18 05:52:26.431919 | orchestrator | ok: [testbed-node-1] => { 2026-02-18 05:52:26.431930 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-18 05:52:26.431941 | orchestrator | } 2026-02-18 05:52:26.431952 | orchestrator | ok: [testbed-node-2] => { 2026-02-18 05:52:26.431963 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-18 05:52:26.431981 | orchestrator | } 2026-02-18 05:52:26.432000 | orchestrator | ok: [testbed-node-3] => { 2026-02-18 05:52:26.432018 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-18 05:52:26.432038 | orchestrator | } 2026-02-18 05:52:26.432057 | orchestrator | ok: [testbed-node-4] => { 2026-02-18 05:52:26.432077 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-18 05:52:26.432097 | orchestrator | } 2026-02-18 05:52:26.432116 | orchestrator | ok: [testbed-node-5] => { 2026-02-18 05:52:26.432131 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-18 05:52:26.432142 | orchestrator | } 2026-02-18 05:52:26.432158 | orchestrator | ok: [testbed-manager] => { 2026-02-18 05:52:26.432176 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-18 05:52:26.432195 | orchestrator | } 2026-02-18 05:52:26.432212 | orchestrator | 2026-02-18 05:52:26.432223 | orchestrator | TASK [Gather 
facts] ************************************************************ 2026-02-18 05:52:26.432234 | orchestrator | Wednesday 18 February 2026 05:51:19 +0000 (0:00:04.953) 0:00:07.992 **** 2026-02-18 05:52:26.432245 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:52:26.432255 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:52:26.432266 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:52:26.432277 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:52:26.432288 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:52:26.432315 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:52:26.432337 | orchestrator | ok: [testbed-manager] 2026-02-18 05:52:26.432356 | orchestrator | 2026-02-18 05:52:26.432377 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-02-18 05:52:26.432397 | orchestrator | Wednesday 18 February 2026 05:51:24 +0000 (0:00:05.100) 0:00:13.093 **** 2026-02-18 05:52:26.432416 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 05:52:26.432435 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 05:52:26.432456 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:52:26.432518 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 05:52:26.432531 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 05:52:26.432542 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 05:52:26.432553 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 05:52:26.432564 | orchestrator | 2026-02-18 05:52:26.432575 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-18 05:52:26.432586 | 
orchestrator | Wednesday 18 February 2026 05:51:54 +0000 (0:00:30.198) 0:00:43.291 **** 2026-02-18 05:52:26.432597 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:26.432608 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:26.432619 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:26.432630 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:26.432640 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:26.432651 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:26.432662 | orchestrator | ok: [testbed-manager] 2026-02-18 05:52:26.432672 | orchestrator | 2026-02-18 05:52:26.432683 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 05:52:26.432694 | orchestrator | Wednesday 18 February 2026 05:51:56 +0000 (0:00:02.171) 0:00:45.463 **** 2026-02-18 05:52:26.432706 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-18 05:52:26.432724 | orchestrator | 2026-02-18 05:52:26.432743 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 05:52:26.432760 | orchestrator | Wednesday 18 February 2026 05:51:59 +0000 (0:00:02.911) 0:00:48.375 **** 2026-02-18 05:52:26.432779 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:26.432797 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:26.432817 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:26.432835 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:26.432855 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:26.432866 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:26.432877 | orchestrator | ok: [testbed-manager] 2026-02-18 05:52:26.432887 | orchestrator | 2026-02-18 05:52:26.432916 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-18 05:52:26.432928 | orchestrator | 
Wednesday 18 February 2026 05:52:02 +0000 (0:00:02.581) 0:00:50.956 **** 2026-02-18 05:52:26.432939 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:26.432949 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:26.432960 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:26.432971 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:26.432981 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:26.432992 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:26.433003 | orchestrator | ok: [testbed-manager] 2026-02-18 05:52:26.433013 | orchestrator | 2026-02-18 05:52:26.433025 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 05:52:26.433035 | orchestrator | Wednesday 18 February 2026 05:52:03 +0000 (0:00:01.900) 0:00:52.857 **** 2026-02-18 05:52:26.433046 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:26.433057 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:26.433068 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:26.433078 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:26.433097 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:26.433116 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:26.433133 | orchestrator | ok: [testbed-manager] 2026-02-18 05:52:26.433151 | orchestrator | 2026-02-18 05:52:26.433170 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 05:52:26.433189 | orchestrator | Wednesday 18 February 2026 05:52:06 +0000 (0:00:02.581) 0:00:55.438 **** 2026-02-18 05:52:26.433209 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:26.433227 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:26.433243 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:26.433254 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:26.433274 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:26.433284 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:26.433295 | orchestrator | ok: 
[testbed-manager] 2026-02-18 05:52:26.433306 | orchestrator | 2026-02-18 05:52:26.433316 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-18 05:52:26.433327 | orchestrator | Wednesday 18 February 2026 05:52:08 +0000 (0:00:02.055) 0:00:57.494 **** 2026-02-18 05:52:26.433337 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:26.433348 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:26.433358 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:26.433368 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:26.433379 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:26.433389 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:26.433400 | orchestrator | ok: [testbed-manager] 2026-02-18 05:52:26.433410 | orchestrator | 2026-02-18 05:52:26.433421 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-18 05:52:26.433572 | orchestrator | Wednesday 18 February 2026 05:52:10 +0000 (0:00:02.296) 0:00:59.790 **** 2026-02-18 05:52:26.433610 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:26.433622 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:26.433632 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:26.433643 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:26.433654 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:26.433664 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:26.433675 | orchestrator | ok: [testbed-manager] 2026-02-18 05:52:26.433686 | orchestrator | 2026-02-18 05:52:26.433697 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-18 05:52:26.433707 | orchestrator | Wednesday 18 February 2026 05:52:13 +0000 (0:00:02.135) 0:01:01.926 **** 2026-02-18 05:52:26.433718 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:52:26.433729 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:52:26.433739 | orchestrator | skipping: 
[testbed-node-2] 2026-02-18 05:52:26.433750 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:52:26.433760 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:52:26.433771 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:52:26.433786 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:52:26.433797 | orchestrator | 2026-02-18 05:52:26.433808 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-18 05:52:26.433818 | orchestrator | Wednesday 18 February 2026 05:52:15 +0000 (0:00:02.318) 0:01:04.244 **** 2026-02-18 05:52:26.433829 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:26.433846 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:26.433864 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:26.433882 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:26.433901 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:26.433919 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:26.433939 | orchestrator | ok: [testbed-manager] 2026-02-18 05:52:26.433958 | orchestrator | 2026-02-18 05:52:26.433976 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-18 05:52:26.433991 | orchestrator | Wednesday 18 February 2026 05:52:17 +0000 (0:00:02.056) 0:01:06.302 **** 2026-02-18 05:52:26.434002 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:52:26.434013 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 05:52:26.434109 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 05:52:26.434120 | orchestrator | 2026-02-18 05:52:26.434131 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-18 05:52:26.434142 | orchestrator | Wednesday 18 February 2026 05:52:19 +0000 (0:00:01.718) 0:01:08.021 **** 2026-02-18 05:52:26.434152 | orchestrator | ok: 
[testbed-node-0] 2026-02-18 05:52:26.434163 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:26.434174 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:26.434184 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:26.434195 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:26.434224 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:26.434242 | orchestrator | ok: [testbed-manager] 2026-02-18 05:52:26.434260 | orchestrator | 2026-02-18 05:52:26.434279 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-18 05:52:26.434298 | orchestrator | Wednesday 18 February 2026 05:52:21 +0000 (0:00:02.497) 0:01:10.518 **** 2026-02-18 05:52:26.434319 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:52:26.434338 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 05:52:26.434352 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 05:52:26.434363 | orchestrator | 2026-02-18 05:52:26.434374 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-18 05:52:26.434385 | orchestrator | Wednesday 18 February 2026 05:52:24 +0000 (0:00:03.299) 0:01:13.818 **** 2026-02-18 05:52:26.434410 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-18 05:52:50.502954 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-18 05:52:50.503031 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-18 05:52:50.503037 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:52:50.503043 | orchestrator | 2026-02-18 05:52:50.503048 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-18 05:52:50.503054 | orchestrator | Wednesday 18 February 2026 05:52:26 +0000 (0:00:01.481) 0:01:15.300 **** 2026-02-18 05:52:50.503060 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 05:52:50.503067 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-18 05:52:50.503071 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 05:52:50.503075 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:52:50.503078 | orchestrator | 2026-02-18 05:52:50.503082 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-18 05:52:50.503086 | orchestrator | Wednesday 18 February 2026 05:52:28 +0000 (0:00:02.056) 0:01:17.356 **** 2026-02-18 05:52:50.503091 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 05:52:50.503098 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 05:52:50.503113 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 05:52:50.503117 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:52:50.503134 | orchestrator | 2026-02-18 05:52:50.503138 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-18 05:52:50.503142 | orchestrator | Wednesday 18 February 2026 05:52:29 +0000 (0:00:01.176) 0:01:18.533 **** 2026-02-18 05:52:50.503147 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '90866ac7d579', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 05:52:22.327601', 'end': '2026-02-18 05:52:22.386504', 'delta': '0:00:00.058903', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['90866ac7d579'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-18 05:52:50.503163 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '4c84206aa4db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 05:52:23.152527', 'end': '2026-02-18 05:52:23.211645', 'delta': '0:00:00.059118', 
'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4c84206aa4db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-18 05:52:50.503167 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '11fb53bc1513', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 05:52:23.726420', 'end': '2026-02-18 05:52:23.783727', 'delta': '0:00:00.057307', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['11fb53bc1513'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-18 05:52:50.503171 | orchestrator | 2026-02-18 05:52:50.503175 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-18 05:52:50.503179 | orchestrator | Wednesday 18 February 2026 05:52:30 +0000 (0:00:01.266) 0:01:19.800 **** 2026-02-18 05:52:50.503182 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:50.503187 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:50.503191 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:50.503195 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:50.503198 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:50.503202 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:50.503206 | orchestrator | ok: 
[testbed-manager] 2026-02-18 05:52:50.503209 | orchestrator | 2026-02-18 05:52:50.503213 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-18 05:52:50.503217 | orchestrator | Wednesday 18 February 2026 05:52:33 +0000 (0:00:02.215) 0:01:22.015 **** 2026-02-18 05:52:50.503221 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:52:50.503268 | orchestrator | 2026-02-18 05:52:50.503273 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-18 05:52:50.503277 | orchestrator | Wednesday 18 February 2026 05:52:34 +0000 (0:00:01.296) 0:01:23.312 **** 2026-02-18 05:52:50.503281 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:50.503284 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:50.503288 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:50.503292 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:50.503300 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:50.503303 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:50.503307 | orchestrator | ok: [testbed-manager] 2026-02-18 05:52:50.503311 | orchestrator | 2026-02-18 05:52:50.503315 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-18 05:52:50.503318 | orchestrator | Wednesday 18 February 2026 05:52:36 +0000 (0:00:02.304) 0:01:25.617 **** 2026-02-18 05:52:50.503322 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-18 05:52:50.503326 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:50.503330 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-18 05:52:50.503337 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-18 05:52:50.503341 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-18 05:52:50.503345 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-18 05:52:50.503349 | 
orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-18 05:52:50.503352 | orchestrator | 2026-02-18 05:52:50.503356 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 05:52:50.503360 | orchestrator | Wednesday 18 February 2026 05:52:41 +0000 (0:00:04.607) 0:01:30.224 **** 2026-02-18 05:52:50.503364 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:52:50.503367 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:52:50.503371 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:52:50.503375 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:52:50.503378 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:52:50.503382 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:52:50.503386 | orchestrator | ok: [testbed-manager] 2026-02-18 05:52:50.503390 | orchestrator | 2026-02-18 05:52:50.503393 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 05:52:50.503397 | orchestrator | Wednesday 18 February 2026 05:52:43 +0000 (0:00:02.195) 0:01:32.420 **** 2026-02-18 05:52:50.503401 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:52:50.503405 | orchestrator | 2026-02-18 05:52:50.503408 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 05:52:50.503412 | orchestrator | Wednesday 18 February 2026 05:52:44 +0000 (0:00:01.139) 0:01:33.559 **** 2026-02-18 05:52:50.503416 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:52:50.503420 | orchestrator | 2026-02-18 05:52:50.503423 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 05:52:50.503427 | orchestrator | Wednesday 18 February 2026 05:52:45 +0000 (0:00:01.300) 0:01:34.860 **** 2026-02-18 05:52:50.503431 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:52:50.503435 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:52:50.503438 | orchestrator | 
skipping: [testbed-node-2] 2026-02-18 05:52:50.503442 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:52:50.503446 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:52:50.503450 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:52:50.503454 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:52:50.503458 | orchestrator | 2026-02-18 05:52:50.503461 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 05:52:50.503465 | orchestrator | Wednesday 18 February 2026 05:52:48 +0000 (0:00:02.388) 0:01:37.249 **** 2026-02-18 05:52:50.503469 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:52:50.503473 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:52:50.503476 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:52:50.503480 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:52:50.503484 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:52:50.503488 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:52:50.503495 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:53:01.335364 | orchestrator | 2026-02-18 05:53:01.335469 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 05:53:01.335480 | orchestrator | Wednesday 18 February 2026 05:52:50 +0000 (0:00:02.121) 0:01:39.370 **** 2026-02-18 05:53:01.335488 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:53:01.335518 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:53:01.335525 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:53:01.335532 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:01.335539 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:53:01.335546 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:53:01.335552 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:53:01.335604 | orchestrator | 2026-02-18 05:53:01.335613 | orchestrator | TASK [ceph-facts : Resolve 
dedicated_device link(s)] ***************************
2026-02-18 05:53:01.335621 | orchestrator | Wednesday 18 February 2026 05:52:52 +0000 (0:00:02.069) 0:01:41.544 ****
2026-02-18 05:53:01.335628 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:53:01.335635 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:53:01.335693 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:53:01.335701 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:53:01.335708 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:53:01.335714 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:53:01.335721 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:53:01.335728 | orchestrator |
2026-02-18 05:53:01.335734 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-18 05:53:01.335741 | orchestrator | Wednesday 18 February 2026 05:52:54 +0000 (0:00:02.069) 0:01:43.613 ****
2026-02-18 05:53:01.335748 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:53:01.335755 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:53:01.335761 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:53:01.335768 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:53:01.335775 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:53:01.335782 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:53:01.335788 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:53:01.335795 | orchestrator |
2026-02-18 05:53:01.335801 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-18 05:53:01.335808 | orchestrator | Wednesday 18 February 2026 05:52:57 +0000 (0:00:02.345) 0:01:45.959 ****
2026-02-18 05:53:01.335815 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:53:01.335821 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:53:01.335828 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:53:01.335834 | orchestrator |
skipping: [testbed-node-3]
2026-02-18 05:53:01.335841 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:53:01.335848 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:53:01.335854 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:53:01.335861 | orchestrator |
2026-02-18 05:53:01.335868 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-18 05:53:01.335876 | orchestrator | Wednesday 18 February 2026 05:52:58 +0000 (0:00:01.908) 0:01:47.867 ****
2026-02-18 05:53:01.335882 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:53:01.335889 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:53:01.335895 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:53:01.335902 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:53:01.335909 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:53:01.335915 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:53:01.335923 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:53:01.335931 | orchestrator |
2026-02-18 05:53:01.335938 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-18 05:53:01.335958 | orchestrator | Wednesday 18 February 2026 05:53:01 +0000 (0:00:02.189) 0:01:50.057 ****
2026-02-18 05:53:01.335968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.335980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.335995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.336020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-18 05:53:01.336030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.336038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.336047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.336062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ab2d03ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15', 
'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-18 05:53:01.336080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.336094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674228 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:53:01.674348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-18 05:53:01.674441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '907e2eef', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})
2026-02-18 05:53:01.674539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674576 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:53:01.674587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674610 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.674622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-18 05:53:01.674718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.960915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.961043 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.961095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd638dc9f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-18 05:53:01.961149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.961163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.961175 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:53:01.961209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})
2026-02-18 05:53:01.961222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'uuids': ['b16ba19b-4a40-4954-b96f-45d5ea534fea'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN']}})
2026-02-18 05:53:01.961236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3f0eb34d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})
2026-02-18 05:53:01.961295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31']}})
2026-02-18 05:53:01.961310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.961322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:01.961334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-52-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-18 05:53:01.961354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:02.104300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ', 'dm-uuid-CRYPT-LUKS2-a588a620006c41148df487d2b156bd76-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-18 05:53:02.104406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:02.104471 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'uuids': ['a588a620-006c-4114-8df4-87d2b156bd76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ']}})
2026-02-18 05:53:02.104487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2']}})
2026-02-18 05:53:02.104500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 05:53:02.104535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b754618', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 05:53:02.104556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.104573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.104585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.104597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN', 'dm-uuid-CRYPT-LUKS2-b16ba19b4a404954b96f45d5ea534fea-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 05:53:02.104609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'uuids': ['979a0cee-d595-4490-b8ce-61c0ee691ca0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17']}})  2026-02-18 05:53:02.104629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4d92644', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 05:53:02.142910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.143006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906']}})  2026-02-18 05:53:02.143058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.143072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'uuids': ['95905d4e-bf83-4096-8e9b-20c58ade16b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB']}})  2026-02-18 05:53:02.143085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.143097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5427a30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 05:53:02.143110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 05:53:02.143138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3']}})  2026-02-18 05:53:02.143158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.143206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.143241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF', 'dm-uuid-CRYPT-LUKS2-618550ddd31f436ab0c76e785ef9ce84-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 05:53:02.143260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.143279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.143298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'uuids': ['618550dd-d31f-436a-b0c7-6e785ef9ce84'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF']}})  2026-02-18 05:53:02.143331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 05:53:02.243528 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1']}})  2026-02-18 05:53:02.243627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.243691 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:02.243723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.243738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur', 
'dm-uuid-CRYPT-LUKS2-8cf9dc351f244d02b853cca8cfa45a9c-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 05:53:02.243771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f33eab1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 05:53:02.243805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.243817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.243835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'uuids': ['8cf9dc35-1f24-4d02-b853-cca8cfa45a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur']}})  2026-02-18 05:53:02.243848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:02.243859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17', 'dm-uuid-CRYPT-LUKS2-979a0ceed5954490b8ce61c0ee691ca0-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 05:53:02.243871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72']}})  2026-02-18 05:53:02.243889 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:53:02.243908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:03.507544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5e163393', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 05:53:03.507705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:03.507728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:03.507741 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:03.507753 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:03.507808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB', 'dm-uuid-CRYPT-LUKS2-95905d4ebf8340968e9b20c58ade16b8-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 05:53:03.507823 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:53:03.507836 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:03.507855 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 
'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-28-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 05:53:03.507868 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:03.507879 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:03.507890 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:03.507912 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186', 'scsi-SQEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3960e98d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part16', 'scsi-SQEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part14', 'scsi-SQEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part15', 'scsi-SQEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part1', 'scsi-SQEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 05:53:03.744932 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:03.745046 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:53:03.745063 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:53:03.745075 | orchestrator | 2026-02-18 05:53:03.745086 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 05:53:03.745098 | orchestrator | Wednesday 18 February 2026 05:53:03 +0000 (0:00:02.306) 0:01:52.364 **** 2026-02-18 05:53:03.745110 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-02-18 05:53:03.745123 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.745155 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.745167 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE 
interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.745195 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.745212 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.745222 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.745234 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ab2d03ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1'], 'uuids': 
['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.745262 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.799777 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.799875 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 
'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.799889 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.799919 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.799931 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': 
{'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.799942 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.799976 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.799988 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.800002 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '907e2eef', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15', 
'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.800021 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:03.800045 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.097933 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:53:04.098095 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.098118 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.098169 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.098183 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.098208 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.098220 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.098274 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.098291 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd638dc9f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.098313 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.098325 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:53:04.098342 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.098363 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.296018 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'uuids': ['b16ba19b-4a40-4954-b96f-45d5ea534fea'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.296149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3f0eb34d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.296166 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:53:04.296181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.296209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.296222 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.296257 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-52-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.296270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.296289 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ', 'dm-uuid-CRYPT-LUKS2-a588a620006c41148df487d2b156bd76-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.296308 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.296345 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'uuids': ['a588a620-006c-4114-8df4-87d2b156bd76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.296378 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.376214 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.376318 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.376336 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'uuids': ['979a0cee-d595-4490-b8ce-61c0ee691ca0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.376388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b754618', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.376425 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4d92644', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.376438 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.376449 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 
'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.376466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.376494 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.513347 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN', 'dm-uuid-CRYPT-LUKS2-b16ba19b4a404954b96f45d5ea534fea-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.513440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.513456 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.513467 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.513493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF', 'dm-uuid-CRYPT-LUKS2-618550ddd31f436ab0c76e785ef9ce84-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.513538 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.513551 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:04.513566 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'uuids': ['618550dd-d31f-436a-b0c7-6e785ef9ce84'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.513578 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': 
{}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.513590 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.513615 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f33eab1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 
'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672081 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672178 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672194 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17', 'dm-uuid-CRYPT-LUKS2-979a0ceed5954490b8ce61c0ee691ca0-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672223 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'uuids': ['95905d4e-bf83-4096-8e9b-20c58ade16b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672285 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5427a30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672298 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672311 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672329 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672349 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672361 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.672373 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:53:04.672396 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur', 'dm-uuid-CRYPT-LUKS2-8cf9dc351f244d02b853cca8cfa45a9c-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.777471 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.777564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'uuids': ['8cf9dc35-1f24-4d02-b853-cca8cfa45a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.777622 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 
05:53:04.777643 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72']}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.777746 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.777779 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.777828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5e163393', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 
'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.777854 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.777867 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:04.777886 | 
orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-28-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:13.572950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:13.573110 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB', 'dm-uuid-CRYPT-LUKS2-95905d4ebf8340968e9b20c58ade16b8-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:13.573129 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:13.573142 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:53:13.573156 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:13.573169 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:13.573204 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186', 'scsi-SQEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3960e98d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part16', 'scsi-SQEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part14', 'scsi-SQEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part15', 'scsi-SQEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part1', 'scsi-SQEMU_QEMU_HARDDISK_3960e98d-77d9-4e0f-a638-1ea8be384186-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:13.573227 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:13.573239 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:53:13.573250 | orchestrator | skipping: [testbed-manager] 
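The skipped items above are loop iterations over the `ansible_devices` facts: testbed-node-5 skips every disk because `osd_auto_discovery | default(False) | bool` is false, and testbed-manager skips because `inventory_hostname in groups.get(osd_group_name, [])` is false. A minimal sketch of the kind of per-device filtering OSD auto-discovery performs, using device shapes taken from the log; the selection rules and function name here are assumptions for illustration, not ceph-ansible's actual code:

```python
# Hypothetical sketch of OSD auto-discovery style filtering over the
# ansible_devices facts seen in the skip records above. Device entries
# mirror the log; the candidacy rules are an assumption.

devices = {
    "loop0": {"partitions": {}, "holders": [], "size": "0.00 Bytes"},
    "sda":   {"partitions": {"sda1": {}}, "holders": [], "size": "80.00 GB"},
    "sdb":   {"partitions": {}, "holders": [], "size": "20.00 GB"},
    "sdc":   {"partitions": {}, "holders": ["ceph--a3fa5e2b-..."], "size": "20.00 GB"},
}

def usable_for_osd(name, dev):
    """A disk is a candidate only if it is empty: not a loop/dm/cdrom
    device, no partitions, no holders (e.g. an existing ceph LV), and
    a non-zero size."""
    if name.startswith(("loop", "dm-", "sr")):
        return False
    if dev["partitions"] or dev["holders"]:
        return False
    return dev["size"] != "0.00 Bytes"

candidates = [n for n, d in sorted(devices.items()) if usable_for_osd(n, d)]
print(candidates)  # -> ['sdb']
```

In the log none of this filtering runs, because the outer conditional (`osd_auto_discovery` disabled, or the host not being in the OSD group) short-circuits each item to `skipping`.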
2026-02-18 05:53:13.573261 | orchestrator | 2026-02-18 05:53:13.573274 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 05:53:13.573286 | orchestrator | Wednesday 18 February 2026 05:53:05 +0000 (0:00:02.438) 0:01:54.802 **** 2026-02-18 05:53:13.573298 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:53:13.573310 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:53:13.573320 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:53:13.573331 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:53:13.573341 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:53:13.573352 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:53:13.573363 | orchestrator | ok: [testbed-manager] 2026-02-18 05:53:13.573373 | orchestrator | 2026-02-18 05:53:13.573384 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 05:53:13.573395 | orchestrator | Wednesday 18 February 2026 05:53:08 +0000 (0:00:02.797) 0:01:57.599 **** 2026-02-18 05:53:13.573406 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:53:13.573416 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:53:13.573427 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:53:13.573437 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:53:13.573448 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:53:13.573458 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:53:13.573469 | orchestrator | ok: [testbed-manager] 2026-02-18 05:53:13.573482 | orchestrator | 2026-02-18 05:53:13.573495 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 05:53:13.573515 | orchestrator | Wednesday 18 February 2026 05:53:10 +0000 (0:00:02.121) 0:01:59.721 **** 2026-02-18 05:53:13.573528 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:53:13.573540 | orchestrator | ok: [testbed-node-1] 2026-02-18 05:53:13.573552 | orchestrator | ok: [testbed-node-2] 2026-02-18 05:53:13.573564 | 
orchestrator | ok: [testbed-node-3] 2026-02-18 05:53:13.573576 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:53:13.573589 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:53:13.573600 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:53:13.573610 | orchestrator | 2026-02-18 05:53:13.573621 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 05:53:13.573639 | orchestrator | Wednesday 18 February 2026 05:53:13 +0000 (0:00:02.714) 0:02:02.436 **** 2026-02-18 05:53:44.869187 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:53:44.869290 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:53:44.869302 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:53:44.869311 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:44.869320 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:53:44.869329 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:53:44.869338 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:53:44.869348 | orchestrator | 2026-02-18 05:53:44.869359 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 05:53:44.869370 | orchestrator | Wednesday 18 February 2026 05:53:15 +0000 (0:00:02.175) 0:02:04.612 **** 2026-02-18 05:53:44.869379 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:53:44.869388 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:53:44.869397 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:53:44.869405 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:44.869414 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:53:44.869423 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:53:44.869432 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-02-18 05:53:44.869441 | orchestrator | 2026-02-18 05:53:44.869450 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-02-18 05:53:44.869459 | orchestrator | Wednesday 18 February 2026 05:53:18 +0000 (0:00:02.649) 0:02:07.261 **** 2026-02-18 05:53:44.869468 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:53:44.869477 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:53:44.869486 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:53:44.869495 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:44.869578 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:53:44.869593 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:53:44.869603 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:53:44.869611 | orchestrator | 2026-02-18 05:53:44.869620 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 05:53:44.869633 | orchestrator | Wednesday 18 February 2026 05:53:20 +0000 (0:00:01.990) 0:02:09.252 **** 2026-02-18 05:53:44.869643 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:53:44.869652 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-18 05:53:44.869661 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-18 05:53:44.869669 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-18 05:53:44.869678 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-18 05:53:44.869687 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-18 05:53:44.869696 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-18 05:53:44.869704 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-18 05:53:44.869713 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-18 05:53:44.869723 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-18 05:53:44.869732 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-18 05:53:44.869740 | orchestrator | ok: [testbed-node-2] => 
(item=testbed-node-2) 2026-02-18 05:53:44.869749 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-18 05:53:44.869774 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-18 05:53:44.869783 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-18 05:53:44.869791 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-18 05:53:44.869800 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-18 05:53:44.869808 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-18 05:53:44.869817 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-18 05:53:44.869864 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-18 05:53:44.869874 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-18 05:53:44.869883 | orchestrator | 2026-02-18 05:53:44.869892 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 05:53:44.869900 | orchestrator | Wednesday 18 February 2026 05:53:23 +0000 (0:00:03.245) 0:02:12.497 **** 2026-02-18 05:53:44.869909 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-18 05:53:44.869918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-18 05:53:44.869927 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-18 05:53:44.869935 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:53:44.869944 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-18 05:53:44.869952 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-18 05:53:44.869961 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-18 05:53:44.869969 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:53:44.869978 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-18 05:53:44.869986 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-1)  2026-02-18 05:53:44.869995 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-18 05:53:44.870003 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:53:44.870012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-18 05:53:44.870098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-18 05:53:44.870108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-18 05:53:44.870116 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:44.870125 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-18 05:53:44.870134 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-18 05:53:44.870143 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-18 05:53:44.870157 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-18 05:53:44.870172 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-18 05:53:44.870187 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-18 05:53:44.870203 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:53:44.870219 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:53:44.870287 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-18 05:53:44.870298 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-18 05:53:44.870307 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-18 05:53:44.870316 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:53:44.870324 | orchestrator | 2026-02-18 05:53:44.870333 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 05:53:44.870342 | orchestrator | Wednesday 18 February 2026 05:53:26 +0000 (0:00:02.385) 0:02:14.883 **** 2026-02-18 05:53:44.870351 | orchestrator | 
skipping: [testbed-node-0] 2026-02-18 05:53:44.870359 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:53:44.870368 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:53:44.870377 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:53:44.870386 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 05:53:44.870395 | orchestrator | 2026-02-18 05:53:44.870405 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 05:53:44.870424 | orchestrator | Wednesday 18 February 2026 05:53:28 +0000 (0:00:02.274) 0:02:17.157 **** 2026-02-18 05:53:44.870433 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:44.870442 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:53:44.870451 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:53:44.870459 | orchestrator | 2026-02-18 05:53:44.870468 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 05:53:44.870477 | orchestrator | Wednesday 18 February 2026 05:53:29 +0000 (0:00:01.669) 0:02:18.827 **** 2026-02-18 05:53:44.870485 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:44.870494 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:53:44.870502 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:53:44.870511 | orchestrator | 2026-02-18 05:53:44.870525 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 05:53:44.870534 | orchestrator | Wednesday 18 February 2026 05:53:31 +0000 (0:00:01.462) 0:02:20.289 **** 2026-02-18 05:53:44.870543 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:44.870551 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:53:44.870560 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:53:44.870569 | orchestrator | 
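The skip/ok pattern above reflects a fallback chain: the role first tries to derive `_radosgw_address` from `radosgw_address_block` (ipv4, then ipv6; both skipped here), and only then falls back to the configured `radosgw_address`, which is what succeeds on testbed-node-3/4/5 in the next task. A small sketch of that ordering, with the function name and signature invented for illustration (not the ceph-facts role's code):

```python
# Hypothetical sketch of the address-selection fallback the ceph-facts
# tasks above walk through: CIDR block first, configured address second.
import ipaddress

def pick_radosgw_address(node_ips, address_block=None, configured=None):
    # 1) If an address block (CIDR) is set, pick the node IP inside it.
    if address_block:
        net = ipaddress.ip_network(address_block)
        for ip in node_ips:
            if ipaddress.ip_address(ip) in net:
                return ip
    # 2) Otherwise fall back to the explicitly configured address,
    #    matching the log where the block tasks skip and this one is ok.
    return configured

print(pick_radosgw_address(["192.168.16.13"], configured="192.168.16.13"))
# -> 192.168.16.13
```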
2026-02-18 05:53:44.870577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 05:53:44.870586 | orchestrator | Wednesday 18 February 2026 05:53:32 +0000 (0:00:01.400) 0:02:21.689 **** 2026-02-18 05:53:44.870595 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:53:44.870604 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:53:44.870612 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:53:44.870621 | orchestrator | 2026-02-18 05:53:44.870630 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 05:53:44.870638 | orchestrator | Wednesday 18 February 2026 05:53:34 +0000 (0:00:01.443) 0:02:23.133 **** 2026-02-18 05:53:44.870647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 05:53:44.870655 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 05:53:44.870664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 05:53:44.870672 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:44.870681 | orchestrator | 2026-02-18 05:53:44.870690 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 05:53:44.870699 | orchestrator | Wednesday 18 February 2026 05:53:36 +0000 (0:00:01.867) 0:02:25.001 **** 2026-02-18 05:53:44.870708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 05:53:44.870716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 05:53:44.870725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 05:53:44.870733 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:44.870742 | orchestrator | 2026-02-18 05:53:44.870751 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 05:53:44.870759 | orchestrator | Wednesday 18 February 2026 05:53:37 +0000 
(0:00:01.815) 0:02:26.817 **** 2026-02-18 05:53:44.870768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 05:53:44.870776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 05:53:44.870785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 05:53:44.870794 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:53:44.870802 | orchestrator | 2026-02-18 05:53:44.870811 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 05:53:44.870820 | orchestrator | Wednesday 18 February 2026 05:53:39 +0000 (0:00:01.772) 0:02:28.589 **** 2026-02-18 05:53:44.870852 | orchestrator | ok: [testbed-node-3] 2026-02-18 05:53:44.870862 | orchestrator | ok: [testbed-node-4] 2026-02-18 05:53:44.870870 | orchestrator | ok: [testbed-node-5] 2026-02-18 05:53:44.870879 | orchestrator | 2026-02-18 05:53:44.870888 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 05:53:44.870903 | orchestrator | Wednesday 18 February 2026 05:53:41 +0000 (0:00:01.432) 0:02:30.021 **** 2026-02-18 05:53:44.870912 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-18 05:53:44.870921 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-18 05:53:44.870930 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-18 05:53:44.870938 | orchestrator | 2026-02-18 05:53:44.870947 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 05:53:44.870956 | orchestrator | Wednesday 18 February 2026 05:53:42 +0000 (0:00:01.609) 0:02:31.631 **** 2026-02-18 05:53:44.870965 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:53:44.870973 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 05:53:44.870983 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-02-18 05:53:44.870991 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 05:53:44.871006 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 05:54:34.860822 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 05:54:34.860943 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 05:54:34.860959 | orchestrator | 2026-02-18 05:54:34.860972 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 05:54:34.860985 | orchestrator | Wednesday 18 February 2026 05:53:44 +0000 (0:00:02.095) 0:02:33.726 **** 2026-02-18 05:54:34.861020 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:54:34.861100 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 05:54:34.861112 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 05:54:34.861124 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 05:54:34.861135 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 05:54:34.861146 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 05:54:34.861157 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 05:54:34.861167 | orchestrator | 2026-02-18 05:54:34.861179 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-02-18 05:54:34.861190 | orchestrator | Wednesday 18 February 2026 05:53:47 +0000 (0:00:03.012) 0:02:36.739 **** 2026-02-18 05:54:34.861201 | orchestrator | changed: [testbed-node-3] 2026-02-18 
2026-02-18 05:54:34.861213 | orchestrator | changed: [testbed-node-4]
2026-02-18 05:54:34.861225 | orchestrator | changed: [testbed-node-5]
2026-02-18 05:54:34.861243 | orchestrator | changed: [testbed-manager]
2026-02-18 05:54:34.861277 | orchestrator | changed: [testbed-node-2]
2026-02-18 05:54:34.861290 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:54:34.861301 | orchestrator | changed: [testbed-node-1]
2026-02-18 05:54:34.861312 | orchestrator |
2026-02-18 05:54:34.861323 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-02-18 05:54:34.861334 | orchestrator | Wednesday 18 February 2026 05:53:58 +0000 (0:00:11.136) 0:02:47.876 ****
2026-02-18 05:54:34.861345 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.861358 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.861371 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.861383 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.861395 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.861408 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.861421 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.861433 | orchestrator |
2026-02-18 05:54:34.861446 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-02-18 05:54:34.861459 | orchestrator | Wednesday 18 February 2026 05:54:01 +0000 (0:00:02.161) 0:02:50.037 ****
2026-02-18 05:54:34.861498 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.861512 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.861524 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.861536 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.861548 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.861560 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.861572 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.861584 | orchestrator |
2026-02-18 05:54:34.861596 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-02-18 05:54:34.861608 | orchestrator | Wednesday 18 February 2026 05:54:03 +0000 (0:00:01.902) 0:02:51.940 ****
2026-02-18 05:54:34.861621 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.861633 | orchestrator | changed: [testbed-node-2]
2026-02-18 05:54:34.861645 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:54:34.861656 | orchestrator | changed: [testbed-node-1]
2026-02-18 05:54:34.861669 | orchestrator | changed: [testbed-node-3]
2026-02-18 05:54:34.861681 | orchestrator | changed: [testbed-node-4]
2026-02-18 05:54:34.861693 | orchestrator | changed: [testbed-node-5]
2026-02-18 05:54:34.861706 | orchestrator |
2026-02-18 05:54:34.861717 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-02-18 05:54:34.861728 | orchestrator | Wednesday 18 February 2026 05:54:06 +0000 (0:00:03.128) 0:02:55.068 ****
2026-02-18 05:54:34.861740 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-18 05:54:34.861752 | orchestrator |
2026-02-18 05:54:34.861763 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-02-18 05:54:34.861774 | orchestrator | Wednesday 18 February 2026 05:54:09 +0000 (0:00:03.055) 0:02:58.124 ****
2026-02-18 05:54:34.861785 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.861795 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.861806 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.861816 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.861827 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.861838 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.861848 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.861859 | orchestrator |
2026-02-18 05:54:34.861870 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-02-18 05:54:34.861881 | orchestrator | Wednesday 18 February 2026 05:54:11 +0000 (0:00:02.033) 0:03:00.157 ****
2026-02-18 05:54:34.861892 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.861902 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.861913 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.861924 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.861934 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.861945 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.861956 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.861966 | orchestrator |
2026-02-18 05:54:34.861977 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-02-18 05:54:34.861989 | orchestrator | Wednesday 18 February 2026 05:54:13 +0000 (0:00:02.281) 0:03:02.439 ****
2026-02-18 05:54:34.862000 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.862106 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.862123 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.862134 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.862145 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.862155 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.862166 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.862177 | orchestrator |
2026-02-18 05:54:34.862188 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-02-18 05:54:34.862199 | orchestrator | Wednesday 18 February 2026 05:54:15 +0000 (0:00:02.110) 0:03:04.550 ****
2026-02-18 05:54:34.862220 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.862231 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.862242 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.862253 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.862264 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.862274 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.862285 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.862296 | orchestrator |
2026-02-18 05:54:34.862307 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-02-18 05:54:34.862318 | orchestrator | Wednesday 18 February 2026 05:54:17 +0000 (0:00:02.204) 0:03:06.755 ****
2026-02-18 05:54:34.862329 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.862339 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.862350 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.862361 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.862371 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.862382 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.862393 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.862404 | orchestrator |
2026-02-18 05:54:34.862415 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-02-18 05:54:34.862426 | orchestrator | Wednesday 18 February 2026 05:54:19 +0000 (0:00:02.007) 0:03:08.762 ****
2026-02-18 05:54:34.862443 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.862454 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.862465 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.862475 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.862486 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.862497 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.862507 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.862518 | orchestrator |
2026-02-18 05:54:34.862530 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-02-18 05:54:34.862541 | orchestrator | Wednesday 18 February 2026 05:54:22 +0000 (0:00:02.136) 0:03:10.898 ****
2026-02-18 05:54:34.862552 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.862562 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.862573 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.862583 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.862594 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.862605 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.862616 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.862626 | orchestrator |
2026-02-18 05:54:34.862637 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-02-18 05:54:34.862648 | orchestrator | Wednesday 18 February 2026 05:54:23 +0000 (0:00:01.952) 0:03:12.851 ****
2026-02-18 05:54:34.862659 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.862670 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.862680 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.862691 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.862702 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.862712 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.862723 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.862734 | orchestrator |
2026-02-18 05:54:34.862745 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-02-18 05:54:34.862756 | orchestrator | Wednesday 18 February 2026 05:54:26 +0000 (0:00:02.231) 0:03:15.082 ****
2026-02-18 05:54:34.862767 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.862778 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.862788 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.862799 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.862810 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.862820 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.862831 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.862849 | orchestrator |
2026-02-18 05:54:34.862860 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-02-18 05:54:34.862871 | orchestrator | Wednesday 18 February 2026 05:54:28 +0000 (0:00:02.091) 0:03:17.175 ****
2026-02-18 05:54:34.862882 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.862892 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.862903 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.862914 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.862925 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.862935 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.862946 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.862956 | orchestrator |
2026-02-18 05:54:34.862967 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-02-18 05:54:34.862978 | orchestrator | Wednesday 18 February 2026 05:54:30 +0000 (0:00:02.359) 0:03:19.534 ****
2026-02-18 05:54:34.862989 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.863000 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.863011 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.863022 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.863050 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.863061 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.863071 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.863082 | orchestrator |
2026-02-18 05:54:34.863093 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-02-18 05:54:34.863104 | orchestrator | Wednesday 18 February 2026 05:54:32 +0000 (0:00:02.276) 0:03:21.811 ****
2026-02-18 05:54:34.863115 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:34.863125 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:34.863136 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:34.863147 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:34.863158 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:34.863169 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:34.863179 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:34.863190 | orchestrator |
2026-02-18 05:54:34.863209 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-02-18 05:54:56.977717 | orchestrator | Wednesday 18 February 2026 05:54:34 +0000 (0:00:01.914) 0:03:23.725 ****
2026-02-18 05:54:56.977833 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:56.977859 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:56.977872 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:56.977884 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 05:54:56.977897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 05:54:56.977909 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:56.977920 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 05:54:56.977931 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 05:54:56.977942 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:56.977953 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 05:54:56.977982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 05:54:56.977994 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:56.978012 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:56.978142 | orchestrator |
2026-02-18 05:54:56.978183 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-02-18 05:54:56.978196 | orchestrator | Wednesday 18 February 2026 05:54:37 +0000 (0:00:02.204) 0:03:25.929 ****
2026-02-18 05:54:56.978207 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:56.978218 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:56.978228 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:56.978239 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:56.978250 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:56.978262 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:56.978275 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:56.978288 | orchestrator |
2026-02-18 05:54:56.978300 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-02-18 05:54:56.978312 | orchestrator | Wednesday 18 February 2026 05:54:39 +0000 (0:00:02.142) 0:03:28.072 ****
2026-02-18 05:54:56.978323 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:56.978333 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:56.978344 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:56.978354 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:56.978365 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:56.978376 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:56.978386 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:56.978397 | orchestrator |
2026-02-18 05:54:56.978408 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-02-18 05:54:56.978419 | orchestrator | Wednesday 18 February 2026 05:54:41 +0000 (0:00:02.090) 0:03:30.162 ****
2026-02-18 05:54:56.978429 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:56.978440 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:56.978451 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:56.978461 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:56.978472 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:56.978484 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:56.978494 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:56.978505 | orchestrator |
2026-02-18 05:54:56.978516 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-02-18 05:54:56.978526 | orchestrator | Wednesday 18 February 2026 05:54:43 +0000 (0:00:01.946) 0:03:32.109 ****
2026-02-18 05:54:56.978537 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:56.978548 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:56.978558 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:56.978569 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:56.978579 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:56.978590 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:56.978600 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:56.978611 | orchestrator |
2026-02-18 05:54:56.978621 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-02-18 05:54:56.978632 | orchestrator | Wednesday 18 February 2026 05:54:45 +0000 (0:00:02.555) 0:03:34.665 ****
2026-02-18 05:54:56.978643 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:56.978653 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:56.978664 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:56.978675 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:56.978685 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:56.978696 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:56.978706 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:56.978717 | orchestrator |
2026-02-18 05:54:56.978728 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-02-18 05:54:56.978739 | orchestrator | Wednesday 18 February 2026 05:54:47 +0000 (0:00:02.044) 0:03:36.710 ****
2026-02-18 05:54:56.978749 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:56.978760 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:56.978770 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:56.978781 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:56.978802 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:56.978813 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:56.978824 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:56.978835 | orchestrator |
2026-02-18 05:54:56.978845 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-02-18 05:54:56.978856 | orchestrator | Wednesday 18 February 2026 05:54:49 +0000 (0:00:01.900) 0:03:38.611 ****
2026-02-18 05:54:56.978885 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:54:56.978897 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:54:56.978908 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:54:56.978918 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:54:56.978930 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-18 05:54:56.978941 | orchestrator |
2026-02-18 05:54:56.978952 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-02-18 05:54:56.978963 | orchestrator | Wednesday 18 February 2026 05:54:52 +0000 (0:00:02.552) 0:03:41.163 ****
2026-02-18 05:54:56.978974 | orchestrator | ok: [testbed-node-3]
2026-02-18 05:54:56.978986 | orchestrator | ok: [testbed-node-4]
2026-02-18 05:54:56.979018 | orchestrator | ok: [testbed-node-5]
2026-02-18 05:54:56.979029 | orchestrator |
2026-02-18 05:54:56.979053 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-02-18 05:54:56.979064 | orchestrator | Wednesday 18 February 2026 05:54:53 +0000 (0:00:01.425) 0:03:42.589 ****
2026-02-18 05:54:56.979086 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 05:54:56.979131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 05:54:56.979150 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:56.979182 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 05:54:56.979194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 05:54:56.979205 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:56.979216 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 05:54:56.979227 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 05:54:56.979238 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:56.979248 | orchestrator |
2026-02-18 05:54:56.979259 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-02-18 05:54:56.979270 | orchestrator | Wednesday 18 February 2026 05:54:55 +0000 (0:00:01.438) 0:03:44.028 ****
2026-02-18 05:54:56.979283 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'}, 'ansible_loop_var': 'item'})
2026-02-18 05:54:56.979296 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'}, 'ansible_loop_var': 'item'})
2026-02-18 05:54:56.979307 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:54:56.979319 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'}, 'ansible_loop_var': 'item'})
2026-02-18 05:54:56.979338 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'}, 'ansible_loop_var': 'item'})
2026-02-18 05:54:56.979349 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:54:56.979360 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'}, 'ansible_loop_var': 'item'})
2026-02-18 05:54:56.979371 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'}, 'ansible_loop_var': 'item'})
2026-02-18 05:54:56.979382 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:54:56.979393 | orchestrator |
2026-02-18 05:54:56.979411 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-02-18 05:55:06.134738 | orchestrator | Wednesday 18 February 2026 05:54:56 +0000 (0:00:01.803) 0:03:45.832 ****
2026-02-18 05:55:06.134850 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:55:06.134866 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:55:06.134878 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:55:06.134890 | orchestrator |
2026-02-18 05:55:06.134902 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-02-18 05:55:06.134914 | orchestrator | Wednesday 18 February 2026 05:54:58 +0000 (0:00:01.403) 0:03:47.235 ****
2026-02-18 05:55:06.134925 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:55:06.134936 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:55:06.134947 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:55:06.134958 | orchestrator |
2026-02-18 05:55:06.134969 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-02-18 05:55:06.134982 | orchestrator | Wednesday 18 February 2026 05:54:59 +0000 (0:00:01.425) 0:03:48.661 ****
2026-02-18 05:55:06.134993 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:55:06.135004 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:55:06.135015 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:55:06.135026 | orchestrator |
2026-02-18 05:55:06.135038 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-02-18 05:55:06.135049 | orchestrator | Wednesday 18 February 2026 05:55:01 +0000 (0:00:01.347) 0:03:50.008 ****
2026-02-18 05:55:06.135060 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:55:06.135071 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:55:06.135082 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:55:06.135093 | orchestrator |
2026-02-18 05:55:06.135105 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-02-18 05:55:06.135132 | orchestrator | Wednesday 18 February 2026 05:55:02 +0000 (0:00:01.396) 0:03:51.405 ****
2026-02-18 05:55:06.135181 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 05:55:06.135196 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 05:55:06.135208 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 05:55:06.135219 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 05:55:06.135250 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 05:55:06.135262 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 05:55:06.135277 | orchestrator |
2026-02-18 05:55:06.135295 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-02-18 05:55:06.135316 | orchestrator | Wednesday 18 February 2026 05:55:04 +0000 (0:00:02.151) 0:03:53.556 ****
2026-02-18 05:55:06.135340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31/osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1771386327.5574973, 'mtime': 1771386327.550497, 'ctime': 1771386327.550497, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31/osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'}, 'ansible_loop_var': 'item'})
2026-02-18 05:55:06.135390 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-c707e11d-d3db-5907-b25a-51e31fa350e2/osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1771386346.158785, 'mtime': 1771386346.152785, 'ctime': 1771386346.152785, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-c707e11d-d3db-5907-b25a-51e31fa350e2/osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'}, 'ansible_loop_var': 'item'})
2026-02-18 05:55:06.135414 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:55:06.135445 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-8ef111f9-34b8-55e5-9a40-00a35805e906/osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1771386327.6152499, 'mtime': 1771386327.6082497, 'ctime': 1771386327.6082497, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8ef111f9-34b8-55e5-9a40-00a35805e906/osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'}, 'ansible_loop_var': 'item'})
2026-02-18 05:55:06.135470 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1/osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 960, 'dev': 6, 'nlink': 1, 'atime': 1771386348.4035618, 'mtime': 1771386348.3965619, 'ctime': 1771386348.3965619, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1/osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'}, 'ansible_loop_var': 'item'})
2026-02-18 05:55:06.135484 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:55:06.135508 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3/osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1771386326.2586856, 'mtime': 1771386326.2526855, 'ctime': 1771386326.2526855, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3/osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'}, 'ansible_loop_var': 'item'})
2026-02-18 05:55:12.123095 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72/osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1771386349.2110548, 'mtime': 1771386349.2050548, 'ctime': 1771386349.2050548, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72/osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'}, 'ansible_loop_var': 'item'})
2026-02-18 05:55:12.123152 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:55:12.123158 | orchestrator |
2026-02-18 05:55:12.123162 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-02-18 05:55:12.123176 | orchestrator | Wednesday 18 February 2026 05:55:06 +0000 (0:00:01.450) 0:03:55.007 ****
2026-02-18 05:55:12.123180 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})
2026-02-18 05:55:12.123184 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})
2026-02-18 05:55:12.123187 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:55:12.123191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})
2026-02-18 05:55:12.123194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})
2026-02-18 05:55:12.123197 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:55:12.123200 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})
2026-02-18 05:55:12.123203 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})
2026-02-18 05:55:12.123206 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:55:12.123209 | orchestrator |
2026-02-18 05:55:12.123213 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-02-18 05:55:12.123216 | orchestrator | Wednesday 18 February 2026 05:55:07 +0000 (0:00:01.377) 0:03:56.385 ****
2026-02-18 05:55:12.123220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'}, 'ansible_loop_var': 'item'})
2026-02-18 05:55:12.123225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'}, 'ansible_loop_var': 'item'})
2026-02-18 05:55:12.123228 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:12.123231 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'}, 'ansible_loop_var': 'item'})  2026-02-18 05:55:12.123242 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'}, 'ansible_loop_var': 'item'})  2026-02-18 05:55:12.123248 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:12.123255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'}, 'ansible_loop_var': 'item'})  2026-02-18 05:55:12.123258 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'}, 'ansible_loop_var': 'item'})  2026-02-18 05:55:12.123261 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:12.123265 | orchestrator | 2026-02-18 05:55:12.123268 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-18 05:55:12.123271 | orchestrator | Wednesday 18 February 2026 05:55:08 +0000 (0:00:01.451) 0:03:57.837 **** 2026-02-18 05:55:12.123274 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'})  2026-02-18 05:55:12.123278 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'})  2026-02-18 05:55:12.123281 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:12.123284 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'})  2026-02-18 05:55:12.123287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'})  2026-02-18 05:55:12.123290 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:12.123293 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'})  2026-02-18 05:55:12.123296 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'})  2026-02-18 05:55:12.123299 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:12.123302 | orchestrator | 2026-02-18 05:55:12.123305 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-18 05:55:12.123309 | orchestrator | Wednesday 18 February 2026 05:55:10 +0000 (0:00:01.779) 0:03:59.616 **** 2026-02-18 05:55:12.123312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31', 'data_vg': 'ceph-62ce64d1-56ba-5b5c-b13c-8c9d2c247f31'}, 
'ansible_loop_var': 'item'})  2026-02-18 05:55:12.123315 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-c707e11d-d3db-5907-b25a-51e31fa350e2', 'data_vg': 'ceph-c707e11d-d3db-5907-b25a-51e31fa350e2'}, 'ansible_loop_var': 'item'})  2026-02-18 05:55:12.123318 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:12.123322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-8ef111f9-34b8-55e5-9a40-00a35805e906', 'data_vg': 'ceph-8ef111f9-34b8-55e5-9a40-00a35805e906'}, 'ansible_loop_var': 'item'})  2026-02-18 05:55:12.123325 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-47b33137-1c4f-52d4-af64-ebc2c48f95b1', 'data_vg': 'ceph-47b33137-1c4f-52d4-af64-ebc2c48f95b1'}, 'ansible_loop_var': 'item'})  2026-02-18 05:55:12.123330 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:12.123333 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-b4fe298a-487d-5630-bf9a-8376c13eb8c3', 'data_vg': 'ceph-b4fe298a-487d-5630-bf9a-8376c13eb8c3'}, 'ansible_loop_var': 'item'})  2026-02-18 05:55:12.123339 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72', 'data_vg': 'ceph-a3fa5e2b-5aa1-58af-bddd-1734a40d2e72'}, 'ansible_loop_var': 'item'})  2026-02-18 05:55:21.597682 | 
orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:21.597812 | orchestrator | 2026-02-18 05:55:21.597837 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-18 05:55:21.597858 | orchestrator | Wednesday 18 February 2026 05:55:12 +0000 (0:00:01.374) 0:04:00.991 **** 2026-02-18 05:55:21.597897 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:55:21.597917 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:55:21.597936 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:55:21.597955 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:21.597973 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:21.597991 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:21.598009 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:55:21.598113 | orchestrator | 2026-02-18 05:55:21.598136 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-18 05:55:21.598156 | orchestrator | Wednesday 18 February 2026 05:55:13 +0000 (0:00:01.742) 0:04:02.733 **** 2026-02-18 05:55:21.598176 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:55:21.598225 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:55:21.598253 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:55:21.598276 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:55:21.598296 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 05:55:21.598316 | orchestrator | 2026-02-18 05:55:21.598337 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-18 05:55:21.598357 | orchestrator | Wednesday 18 February 2026 05:55:16 +0000 (0:00:02.645) 0:04:05.379 **** 2026-02-18 05:55:21.598379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-18 05:55:21.598400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598508 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:21.598528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598620 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:21.598640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 
'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598738 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:21.598759 | orchestrator | 2026-02-18 05:55:21.598780 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-18 05:55:21.598799 | orchestrator | Wednesday 18 February 2026 05:55:17 +0000 (0:00:01.400) 0:04:06.779 **** 2026-02-18 05:55:21.598819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.598948 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:21.598969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-18 05:55:21.599002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599085 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:21.599105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599262 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:21.599284 | orchestrator | 2026-02-18 05:55:21.599303 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-18 05:55:21.599323 | orchestrator | Wednesday 18 February 2026 05:55:19 +0000 (0:00:01.701) 0:04:08.481 **** 2026-02-18 
05:55:21.599342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599440 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:21.599460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599553 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:21.599571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-18 05:55:21.599591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 05:55:21.599667 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:21.599686 | orchestrator | 2026-02-18 05:55:21.599705 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-18 05:55:21.599723 | orchestrator | Wednesday 18 February 2026 05:55:21 +0000 (0:00:01.545) 0:04:10.026 **** 2026-02-18 05:55:21.599741 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:55:21.599759 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:55:21.599793 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:55:38.276841 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:38.276955 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:38.276972 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:38.276984 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:55:38.276995 | orchestrator | 2026-02-18 05:55:38.277025 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-18 05:55:38.277039 | orchestrator | Wednesday 18 February 2026 05:55:23 +0000 (0:00:01.946) 0:04:11.973 **** 2026-02-18 05:55:38.277071 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:55:38.277083 | orchestrator | skipping: [testbed-node-1] 2026-02-18 
05:55:38.277094 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:55:38.277105 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:38.277115 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:38.277126 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:38.277137 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:55:38.277147 | orchestrator | 2026-02-18 05:55:38.277159 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-18 05:55:38.277170 | orchestrator | Wednesday 18 February 2026 05:55:25 +0000 (0:00:02.187) 0:04:14.161 **** 2026-02-18 05:55:38.277181 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:55:38.277191 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:55:38.277202 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:55:38.277213 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:38.277224 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:38.277234 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:38.277245 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:55:38.277312 | orchestrator | 2026-02-18 05:55:38.277334 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
*** 2026-02-18 05:55:38.277351 | orchestrator | Wednesday 18 February 2026 05:55:27 +0000 (0:00:02.280) 0:04:16.441 **** 2026-02-18 05:55:38.277367 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:55:38.277384 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:55:38.277402 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:55:38.277419 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:38.277435 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:38.277453 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:38.277471 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:55:38.277489 | orchestrator | 2026-02-18 05:55:38.277507 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-18 05:55:38.277528 | orchestrator | Wednesday 18 February 2026 05:55:29 +0000 (0:00:02.074) 0:04:18.515 **** 2026-02-18 05:55:38.277545 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:55:38.277564 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:55:38.277582 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:55:38.277601 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:38.277619 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:38.277635 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:38.277654 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:55:38.277672 | orchestrator | 2026-02-18 05:55:38.277691 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-18 05:55:38.277712 | orchestrator | Wednesday 18 February 2026 05:55:31 +0000 (0:00:02.239) 0:04:20.755 **** 2026-02-18 05:55:38.277731 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:55:38.277750 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:55:38.277762 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:55:38.277773 | orchestrator | skipping: [testbed-node-3] 
2026-02-18 05:55:38.277783 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:38.277794 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:38.277804 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:55:38.277815 | orchestrator | 2026-02-18 05:55:38.277825 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-18 05:55:38.277836 | orchestrator | Wednesday 18 February 2026 05:55:34 +0000 (0:00:02.600) 0:04:23.355 **** 2026-02-18 05:55:38.277847 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:55:38.277857 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:55:38.277868 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:55:38.277878 | orchestrator | skipping: [testbed-node-3] 2026-02-18 05:55:38.277889 | orchestrator | skipping: [testbed-node-4] 2026-02-18 05:55:38.277899 | orchestrator | skipping: [testbed-node-5] 2026-02-18 05:55:38.277909 | orchestrator | skipping: [testbed-manager] 2026-02-18 05:55:38.277935 | orchestrator | 2026-02-18 05:55:38.277946 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-18 05:55:38.277957 | orchestrator | Wednesday 18 February 2026 05:55:37 +0000 (0:00:02.587) 0:04:25.942 **** 2026-02-18 05:55:38.277969 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-18 05:55:38.277981 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-18 05:55:38.277993 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-18 05:55:38.278005 | orchestrator | skipping: 
[testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-18 05:55:38.278073 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-18 05:55:38.278088 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-18 05:55:38.278099 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:55:38.278131 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-18 05:55:38.278152 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-18 05:55:38.278163 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-18 05:55:38.278174 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-18 05:55:38.278185 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-18 05:55:38.278196 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-02-18 05:55:38.278207 | orchestrator | skipping: [testbed-node-1] 2026-02-18 05:55:38.278218 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-18 05:55:38.278229 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-18 05:55:38.278240 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-18 05:55:38.278251 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-18 05:55:38.278285 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-18 05:55:38.278297 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-18 05:55:38.278316 | orchestrator | skipping: [testbed-node-2] 2026-02-18 05:55:38.278327 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-18 05:55:38.278339 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-18 05:55:38.278349 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, 
profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-18 05:55:38.278360 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-18 05:55:38.278371 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-18 05:55:38.278396 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-18 05:55:38.278418 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-18 05:55:38.278429 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-18 05:55:38.278440 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-18 05:55:38.278459 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-18 05:55:43.246711 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-18 05:55:43.246816 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-18 05:55:43.246833 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:55:43.246846 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-18 05:55:43.246859 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-18 05:55:43.246872 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-18 05:55:43.246883 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-18 05:55:43.246894 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-18 05:55:43.246905 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-18 05:55:43.246940 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-18 05:55:43.246952 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-18 05:55:43.246963 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-18 05:55:43.246974 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:55:43.246985 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-18 05:55:43.246996 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:55:43.247007 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-18 05:55:43.247018 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-18 05:55:43.247029 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:55:43.247040 | orchestrator |
2026-02-18 05:55:43.247052 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-18 05:55:43.247065 | orchestrator | Wednesday 18 February 2026 05:55:39 +0000 (0:00:02.706) 0:04:28.649 ****
2026-02-18 05:55:43.247076 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:55:43.247087 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:55:43.247097 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:55:43.247108 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:55:43.247119 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:55:43.247130 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:55:43.247141 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:55:43.247151 | orchestrator |
2026-02-18 05:55:43.247162 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-18 05:55:43.247174 | orchestrator | Wednesday 18 February 2026 05:55:42 +0000 (0:00:02.285) 0:04:30.935 ****
2026-02-18 05:55:43.247185 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-18 05:55:43.247196 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-18 05:55:43.247207 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-18 05:55:43.247218 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-18 05:55:43.247253 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-18 05:55:43.247268 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-18 05:55:43.247304 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:55:43.247316 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-18 05:55:43.247337 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-18 05:55:43.247350 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-18 05:55:43.247363 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-18 05:55:43.247375 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-18 05:55:43.247388 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-18 05:55:43.247400 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:55:43.247413 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-18 05:55:43.247425 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-18 05:55:43.247438 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-18 05:55:43.247450 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-18 05:55:43.247462 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-18 05:55:43.247474 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-18 05:55:43.247487 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:55:43.247499 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-18 05:55:43.247511 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-18 05:55:43.247523 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-18 05:55:43.247535 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-18 05:55:43.247547 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-18 05:55:43.247560 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-18 05:55:43.247572 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-18 05:55:43.247592 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-18 05:56:13.280980 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-18 05:56:13.281095 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-18 05:56:13.281112 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-18 05:56:13.281124 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-18 05:56:13.281136 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:56:13.281148 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-18 05:56:13.281159 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:56:13.281169 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-18 05:56:13.281180 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-18 05:56:13.281191 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-18 05:56:13.281203 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-18 05:56:13.281215 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-18 05:56:13.281227 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-18 05:56:13.281238 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-18 05:56:13.281249 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:56:13.281260 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-18 05:56:13.281271 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-18 05:56:13.281281 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-18 05:56:13.281292 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-18 05:56:13.281303 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:56:13.281314 | orchestrator |
2026-02-18 05:56:13.281326 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-18 05:56:13.281339 | orchestrator | Wednesday 18 February 2026 05:55:44 +0000 (0:00:02.653) 0:04:33.589 ****
2026-02-18 05:56:13.281437 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:13.281452 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:56:13.281478 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:56:13.281489 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:56:13.281512 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:56:13.281523 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:56:13.281534 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:56:13.281544 | orchestrator |
2026-02-18 05:56:13.281555 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-18 05:56:13.281566 | orchestrator | Wednesday 18 February 2026 05:55:46 +0000 (0:00:02.071) 0:04:35.661 ****
2026-02-18 05:56:13.281577 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:13.281588 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:56:13.281598 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:56:13.281609 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:56:13.281620 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:56:13.281630 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:56:13.281641 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:56:13.281651 | orchestrator |
2026-02-18 05:56:13.281679 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-18 05:56:13.281698 | orchestrator | Wednesday 18 February 2026 05:55:49 +0000 (0:00:02.264) 0:04:37.925 ****
2026-02-18 05:56:13.281710 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:13.281721 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:56:13.281731 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:56:13.281742 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:56:13.281753 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:56:13.281763 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:56:13.281774 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:56:13.281785 | orchestrator |
2026-02-18 05:56:13.281795 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-18 05:56:13.281806 | orchestrator | Wednesday 18 February 2026 05:55:51 +0000 (0:00:02.609) 0:04:40.534 ****
2026-02-18 05:56:13.281817 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-18 05:56:13.281830 | orchestrator |
2026-02-18 05:56:13.281841 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-18 05:56:13.281852 | orchestrator | Wednesday 18 February 2026 05:55:54 +0000 (0:00:02.819) 0:04:43.354 ****
2026-02-18 05:56:13.281863 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-18 05:56:13.281874 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-18 05:56:13.281885 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-18 05:56:13.281895 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-18 05:56:13.281906 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-18 05:56:13.281917 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-18 05:56:13.281927 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-18 05:56:13.281938 | orchestrator |
2026-02-18 05:56:13.281949 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-18 05:56:13.281960 | orchestrator | Wednesday 18 February 2026 05:55:56 +0000 (0:00:02.142) 0:04:45.496 ****
2026-02-18 05:56:13.281971 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:13.281981 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:56:13.281993 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:56:13.282004 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:56:13.282069 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:56:13.282091 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:56:13.282102 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:56:13.282113 | orchestrator |
2026-02-18 05:56:13.282124 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-18 05:56:13.282135 | orchestrator | Wednesday 18 February 2026 05:55:58 +0000 (0:00:02.183) 0:04:47.680 ****
2026-02-18 05:56:13.282146 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:13.282157 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:56:13.282167 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:56:13.282178 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:56:13.282189 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:56:13.282199 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:56:13.282210 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:56:13.282221 | orchestrator |
2026-02-18 05:56:13.282232 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-18 05:56:13.282243 | orchestrator | Wednesday 18 February 2026 05:56:01 +0000 (0:00:02.318) 0:04:49.999 ****
2026-02-18 05:56:13.282254 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:13.282266 | orchestrator | ok: [testbed-node-1]
2026-02-18 05:56:13.282276 | orchestrator | ok: [testbed-node-2]
2026-02-18 05:56:13.282287 | orchestrator | ok: [testbed-node-3]
2026-02-18 05:56:13.282298 | orchestrator | ok: [testbed-node-4]
2026-02-18 05:56:13.282308 | orchestrator | ok: [testbed-node-5]
2026-02-18 05:56:13.282319 | orchestrator | ok: [testbed-manager]
2026-02-18 05:56:13.282329 | orchestrator |
2026-02-18 05:56:13.282340 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-18 05:56:13.282351 | orchestrator | Wednesday 18 February 2026 05:56:03 +0000 (0:00:02.578) 0:04:52.578 ****
2026-02-18 05:56:13.282362 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:13.282373 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:56:13.282419 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:56:13.282439 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:56:13.282457 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:56:13.282472 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:56:13.282482 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:56:13.282493 | orchestrator |
2026-02-18 05:56:13.282504 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-18 05:56:13.282515 | orchestrator | Wednesday 18 February 2026 05:56:06 +0000 (0:00:02.481) 0:04:55.059 ****
2026-02-18 05:56:13.282526 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:13.282536 | orchestrator | skipping: [testbed-node-1]
2026-02-18 05:56:13.282547 | orchestrator | skipping: [testbed-node-2]
2026-02-18 05:56:13.282557 | orchestrator | skipping: [testbed-node-3]
2026-02-18 05:56:13.282568 | orchestrator | skipping: [testbed-node-4]
2026-02-18 05:56:13.282578 | orchestrator | skipping: [testbed-node-5]
2026-02-18 05:56:13.282589 | orchestrator | skipping: [testbed-manager]
2026-02-18 05:56:13.282599 | orchestrator |
2026-02-18 05:56:13.282610 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-18 05:56:13.282621 | orchestrator | Wednesday 18 February 2026 05:56:08 +0000 (0:00:02.469) 0:04:57.529 ****
2026-02-18 05:56:13.282631 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:13.282642 | orchestrator |
2026-02-18 05:56:13.282653 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-18 05:56:13.282664 | orchestrator | Wednesday 18 February 2026 05:56:11 +0000 (0:00:02.601) 0:05:00.131 ****
2026-02-18 05:56:13.282683 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:52.990010 | orchestrator |
2026-02-18 05:56:52.990224 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-18 05:56:52.990255 | orchestrator |
2026-02-18 05:56:52.990288 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-18 05:56:52.990313 | orchestrator | Wednesday 18 February 2026 05:56:13 +0000 (0:00:02.018) 0:05:02.149 ****
2026-02-18 05:56:52.990330 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.990350 | orchestrator |
2026-02-18 05:56:52.990394 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-18 05:56:52.990414 | orchestrator | Wednesday 18 February 2026 05:56:14 +0000 (0:00:01.522) 0:05:03.672 ****
2026-02-18 05:56:52.990426 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.990437 | orchestrator |
2026-02-18 05:56:52.990447 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-18 05:56:52.990458 | orchestrator | Wednesday 18 February 2026 05:56:15 +0000 (0:00:01.152) 0:05:04.824 ****
2026-02-18 05:56:52.990471 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-18 05:56:52.990484 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-18 05:56:52.990495 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-18 05:56:52.990531 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-18 05:56:52.990546 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-18 05:56:52.990560 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}])
2026-02-18 05:56:52.990574 | orchestrator |
2026-02-18 05:56:52.990587 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-18 05:56:52.990600 | orchestrator |
2026-02-18 05:56:52.990612 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-18 05:56:52.990624 | orchestrator | Wednesday 18 February 2026 05:56:26 +0000 (0:00:10.072) 0:05:14.897 ****
2026-02-18 05:56:52.990637 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.990648 | orchestrator |
2026-02-18 05:56:52.990661 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-18 05:56:52.990673 | orchestrator | Wednesday 18 February 2026 05:56:27 +0000 (0:00:01.484) 0:05:16.381 ****
2026-02-18 05:56:52.990685 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.990698 | orchestrator |
2026-02-18 05:56:52.990710 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-18 05:56:52.990723 | orchestrator | Wednesday 18 February 2026 05:56:28 +0000 (0:00:01.206) 0:05:17.587 ****
2026-02-18 05:56:52.990735 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:52.990756 | orchestrator |
2026-02-18 05:56:52.990769 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-18 05:56:52.990782 | orchestrator | Wednesday 18 February 2026 05:56:29 +0000 (0:00:01.130) 0:05:18.718 ****
2026-02-18 05:56:52.990794 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.990806 | orchestrator |
2026-02-18 05:56:52.990817 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-18 05:56:52.990827 | orchestrator | Wednesday 18 February 2026 05:56:30 +0000 (0:00:01.155) 0:05:19.873 ****
2026-02-18 05:56:52.990838 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-18 05:56:52.990849 | orchestrator |
2026-02-18 05:56:52.990879 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-18 05:56:52.990896 | orchestrator | Wednesday 18 February 2026 05:56:32 +0000 (0:00:01.130) 0:05:21.004 ****
2026-02-18 05:56:52.990908 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.990919 | orchestrator |
2026-02-18 05:56:52.990929 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-18 05:56:52.990940 | orchestrator | Wednesday 18 February 2026 05:56:33 +0000 (0:00:01.481) 0:05:22.485 ****
2026-02-18 05:56:52.990951 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.990962 | orchestrator |
2026-02-18 05:56:52.990973 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-18 05:56:52.990984 | orchestrator | Wednesday 18 February 2026 05:56:34 +0000 (0:00:01.144) 0:05:23.630 ****
2026-02-18 05:56:52.990994 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.991005 | orchestrator |
2026-02-18 05:56:52.991016 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-18 05:56:52.991027 | orchestrator | Wednesday 18 February 2026 05:56:36 +0000 (0:00:01.500) 0:05:25.130 ****
2026-02-18 05:56:52.991038 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.991049 | orchestrator |
2026-02-18 05:56:52.991060 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-18 05:56:52.991071 | orchestrator | Wednesday 18 February 2026 05:56:37 +0000 (0:00:01.135) 0:05:26.266 ****
2026-02-18 05:56:52.991082 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.991093 | orchestrator |
2026-02-18 05:56:52.991104 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-18 05:56:52.991114 | orchestrator | Wednesday 18 February 2026 05:56:38 +0000 (0:00:01.222) 0:05:27.488 ****
2026-02-18 05:56:52.991125 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.991136 | orchestrator |
2026-02-18 05:56:52.991147 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-18 05:56:52.991159 | orchestrator | Wednesday 18 February 2026 05:56:39 +0000 (0:00:01.298) 0:05:28.787 ****
2026-02-18 05:56:52.991170 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:52.991181 | orchestrator |
2026-02-18 05:56:52.991192 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-18 05:56:52.991203 | orchestrator | Wednesday 18 February 2026 05:56:41 +0000 (0:00:01.161) 0:05:29.949 ****
2026-02-18 05:56:52.991214 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.991225 | orchestrator |
2026-02-18 05:56:52.991236 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-18 05:56:52.991246 | orchestrator | Wednesday 18 February 2026 05:56:42 +0000 (0:00:01.136) 0:05:31.085 ****
2026-02-18 05:56:52.991257 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 05:56:52.991268 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 05:56:52.991279 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 05:56:52.991290 | orchestrator |
2026-02-18 05:56:52.991301 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-18 05:56:52.991312 | orchestrator | Wednesday 18 February 2026 05:56:43 +0000 (0:00:01.669) 0:05:32.755 ****
2026-02-18 05:56:52.991323 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:56:52.991334 | orchestrator |
2026-02-18 05:56:52.991351 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-18 05:56:52.991362 | orchestrator | Wednesday 18 February 2026 05:56:45 +0000 (0:00:03.169) 0:05:33.981 ****
2026-02-18 05:56:52.991373 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 05:56:52.991384 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 05:56:52.991394 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 05:56:52.991405 | orchestrator |
2026-02-18 05:56:52.991416 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-18 05:56:52.991427 | orchestrator | Wednesday 18 February 2026 05:56:48 +0000 (0:00:03.169) 0:05:37.150 ****
2026-02-18 05:56:52.991438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 05:56:52.991449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 05:56:52.991460 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 05:56:52.991471 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:52.991482 | orchestrator |
2026-02-18 05:56:52.991493 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-18 05:56:52.991524 | orchestrator | Wednesday 18 February 2026 05:56:49 +0000 (0:00:01.525) 0:05:38.676 ****
2026-02-18 05:56:52.991538 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 05:56:52.991552 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 05:56:52.991563 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 05:56:52.991575 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:56:52.991585 | orchestrator |
2026-02-18 05:56:52.991596 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-18 05:56:52.991607 | orchestrator | Wednesday 18 February 2026 05:56:51 +0000 (0:00:01.962) 0:05:40.638 ****
2026-02-18 05:56:52.991631 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 05:57:13.606510 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 05:57:13.606663 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 05:57:13.606682 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:57:13.606697 | orchestrator |
2026-02-18 05:57:13.606710 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-18 05:57:13.606753 | orchestrator | Wednesday 18 February 2026 05:56:52 +0000 (0:00:01.218) 0:05:41.856 ****
2026-02-18 05:57:13.606774 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '90866ac7d579', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 05:56:45.644890', 'end': '2026-02-18 05:56:45.688612', 'delta': '0:00:00.043722', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['90866ac7d579'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 05:57:13.606797 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '4c84206aa4db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 05:56:46.213427', 'end': '2026-02-18 05:56:46.260956', 'delta': '0:00:00.047529', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4c84206aa4db'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-18 05:57:13.606818 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '11fb53bc1513', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 05:56:47.089044', 'end': '2026-02-18 05:56:47.125095', 'delta': '0:00:00.036051', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['11fb53bc1513'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-18 05:57:13.606839 | orchestrator | 2026-02-18 05:57:13.606858 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-18 05:57:13.606877 | orchestrator | Wednesday 18 February 2026 05:56:54 +0000 (0:00:01.248) 0:05:43.105 **** 2026-02-18 05:57:13.606888 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:57:13.606900 | orchestrator | 2026-02-18 05:57:13.606911 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-18 05:57:13.606922 | orchestrator | Wednesday 18 February 2026 05:56:55 +0000 (0:00:01.365) 0:05:44.470 **** 2026-02-18 05:57:13.606932 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:13.606943 | orchestrator | 2026-02-18 05:57:13.606954 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-18 05:57:13.606980 | orchestrator | Wednesday 18 February 2026 05:56:56 +0000 (0:00:01.247) 0:05:45.718 **** 2026-02-18 05:57:13.606991 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:57:13.607001 | orchestrator | 2026-02-18 
05:57:13.607012 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-18 05:57:13.607024 | orchestrator | Wednesday 18 February 2026 05:56:58 +0000 (0:00:01.259) 0:05:46.978 **** 2026-02-18 05:57:13.607054 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-18 05:57:13.607068 | orchestrator | 2026-02-18 05:57:13.607081 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 05:57:13.607093 | orchestrator | Wednesday 18 February 2026 05:57:00 +0000 (0:00:02.512) 0:05:49.490 **** 2026-02-18 05:57:13.607105 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:57:13.607117 | orchestrator | 2026-02-18 05:57:13.607138 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 05:57:13.607151 | orchestrator | Wednesday 18 February 2026 05:57:01 +0000 (0:00:01.162) 0:05:50.652 **** 2026-02-18 05:57:13.607162 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:13.607175 | orchestrator | 2026-02-18 05:57:13.607187 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 05:57:13.607200 | orchestrator | Wednesday 18 February 2026 05:57:02 +0000 (0:00:01.153) 0:05:51.805 **** 2026-02-18 05:57:13.607212 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:13.607225 | orchestrator | 2026-02-18 05:57:13.607238 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 05:57:13.607250 | orchestrator | Wednesday 18 February 2026 05:57:04 +0000 (0:00:01.242) 0:05:53.048 **** 2026-02-18 05:57:13.607263 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:13.607275 | orchestrator | 2026-02-18 05:57:13.607288 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 05:57:13.607302 | orchestrator | Wednesday 18 February 2026 05:57:05 
+0000 (0:00:01.124) 0:05:54.172 **** 2026-02-18 05:57:13.607314 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:13.607326 | orchestrator | 2026-02-18 05:57:13.607337 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 05:57:13.607348 | orchestrator | Wednesday 18 February 2026 05:57:06 +0000 (0:00:01.230) 0:05:55.403 **** 2026-02-18 05:57:13.607359 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:13.607371 | orchestrator | 2026-02-18 05:57:13.607389 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-18 05:57:13.607416 | orchestrator | Wednesday 18 February 2026 05:57:07 +0000 (0:00:01.135) 0:05:56.538 **** 2026-02-18 05:57:13.607437 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:13.607455 | orchestrator | 2026-02-18 05:57:13.607473 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 05:57:13.607491 | orchestrator | Wednesday 18 February 2026 05:57:08 +0000 (0:00:01.155) 0:05:57.694 **** 2026-02-18 05:57:13.607508 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:13.607526 | orchestrator | 2026-02-18 05:57:13.607543 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-18 05:57:13.607560 | orchestrator | Wednesday 18 February 2026 05:57:10 +0000 (0:00:01.208) 0:05:58.903 **** 2026-02-18 05:57:13.607623 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:13.607641 | orchestrator | 2026-02-18 05:57:13.607653 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 05:57:13.607665 | orchestrator | Wednesday 18 February 2026 05:57:11 +0000 (0:00:01.177) 0:06:00.080 **** 2026-02-18 05:57:13.607675 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:13.607686 | orchestrator | 2026-02-18 05:57:13.607697 | orchestrator | TASK 
[ceph-facts : Collect existed devices] ************************************ 2026-02-18 05:57:13.607708 | orchestrator | Wednesday 18 February 2026 05:57:12 +0000 (0:00:01.147) 0:06:01.228 **** 2026-02-18 05:57:13.607720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:57:13.607732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:57:13.607744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:57:13.607775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 05:57:13.607811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:57:14.858774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:57:14.858877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:57:14.858898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ab2d03ed', 'removable': '0', 
'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-18 05:57:14.858938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:57:14.858966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 05:57:14.858978 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:14.858991 | orchestrator | 2026-02-18 05:57:14.859003 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 05:57:14.859015 | orchestrator | Wednesday 18 February 2026 05:57:13 +0000 (0:00:01.240) 0:06:02.468 **** 2026-02-18 05:57:14.859047 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:57:14.859062 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:57:14.859074 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:57:14.859086 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:57:14.859108 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:57:14.859125 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:57:14.859146 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:57:39.270744 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ab2d03ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:57:39.270885 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:57:39.270921 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 05:57:39.270935 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:39.270949 | orchestrator | 2026-02-18 05:57:39.270963 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 05:57:39.270975 | 
orchestrator | Wednesday 18 February 2026 05:57:14 +0000 (0:00:01.258) 0:06:03.727 **** 2026-02-18 05:57:39.270986 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:57:39.270998 | orchestrator | 2026-02-18 05:57:39.271009 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 05:57:39.271020 | orchestrator | Wednesday 18 February 2026 05:57:16 +0000 (0:00:01.535) 0:06:05.262 **** 2026-02-18 05:57:39.271031 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:57:39.271042 | orchestrator | 2026-02-18 05:57:39.271053 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 05:57:39.271080 | orchestrator | Wednesday 18 February 2026 05:57:17 +0000 (0:00:01.165) 0:06:06.427 **** 2026-02-18 05:57:39.271092 | orchestrator | ok: [testbed-node-0] 2026-02-18 05:57:39.271102 | orchestrator | 2026-02-18 05:57:39.271113 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 05:57:39.271124 | orchestrator | Wednesday 18 February 2026 05:57:19 +0000 (0:00:01.550) 0:06:07.978 **** 2026-02-18 05:57:39.271135 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:39.271146 | orchestrator | 2026-02-18 05:57:39.271156 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 05:57:39.271167 | orchestrator | Wednesday 18 February 2026 05:57:20 +0000 (0:00:01.118) 0:06:09.096 **** 2026-02-18 05:57:39.271178 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:39.271189 | orchestrator | 2026-02-18 05:57:39.271200 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 05:57:39.271211 | orchestrator | Wednesday 18 February 2026 05:57:21 +0000 (0:00:01.236) 0:06:10.333 **** 2026-02-18 05:57:39.271221 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:39.271232 | orchestrator | 2026-02-18 05:57:39.271243 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 05:57:39.271257 | orchestrator | Wednesday 18 February 2026 05:57:22 +0000 (0:00:01.127) 0:06:11.461 **** 2026-02-18 05:57:39.271270 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:57:39.271282 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-18 05:57:39.271306 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-18 05:57:39.271318 | orchestrator | 2026-02-18 05:57:39.271330 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 05:57:39.271342 | orchestrator | Wednesday 18 February 2026 05:57:24 +0000 (0:00:01.990) 0:06:13.452 **** 2026-02-18 05:57:39.271355 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-18 05:57:39.271367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-18 05:57:39.271379 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-18 05:57:39.271392 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:39.271404 | orchestrator | 2026-02-18 05:57:39.271416 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 05:57:39.271428 | orchestrator | Wednesday 18 February 2026 05:57:25 +0000 (0:00:01.230) 0:06:14.683 **** 2026-02-18 05:57:39.271439 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:39.271450 | orchestrator | 2026-02-18 05:57:39.271461 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 05:57:39.271475 | orchestrator | Wednesday 18 February 2026 05:57:26 +0000 (0:00:01.137) 0:06:15.820 **** 2026-02-18 05:57:39.271493 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:57:39.271511 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 
05:57:39.271531 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 05:57:39.271548 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 05:57:39.271565 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 05:57:39.271582 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 05:57:39.271599 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 05:57:39.271616 | orchestrator | 2026-02-18 05:57:39.271635 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 05:57:39.271721 | orchestrator | Wednesday 18 February 2026 05:57:29 +0000 (0:00:02.181) 0:06:18.002 **** 2026-02-18 05:57:39.271742 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:57:39.271760 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 05:57:39.271777 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 05:57:39.271788 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 05:57:39.271799 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 05:57:39.271809 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 05:57:39.271828 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 05:57:39.271839 | orchestrator | 2026-02-18 05:57:39.271850 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-18 05:57:39.271861 | orchestrator | Wednesday 18 February 2026 05:57:32 +0000 (0:00:02.995) 
0:06:20.997 **** 2026-02-18 05:57:39.271872 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-18 05:57:39.271882 | orchestrator | 2026-02-18 05:57:39.271893 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-18 05:57:39.271904 | orchestrator | Wednesday 18 February 2026 05:57:34 +0000 (0:00:02.269) 0:06:23.267 **** 2026-02-18 05:57:39.271915 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:39.271926 | orchestrator | 2026-02-18 05:57:39.271936 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-18 05:57:39.271947 | orchestrator | Wednesday 18 February 2026 05:57:35 +0000 (0:00:01.368) 0:06:24.635 **** 2026-02-18 05:57:39.271958 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:57:39.271978 | orchestrator | 2026-02-18 05:57:39.271989 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-18 05:57:39.272000 | orchestrator | Wednesday 18 February 2026 05:57:36 +0000 (0:00:01.131) 0:06:25.766 **** 2026-02-18 05:57:39.272011 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-18 05:57:39.272022 | orchestrator | 2026-02-18 05:57:39.272033 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-18 05:57:39.272054 | orchestrator | Wednesday 18 February 2026 05:57:39 +0000 (0:00:02.366) 0:06:28.133 **** 2026-02-18 05:58:41.034807 | orchestrator | skipping: [testbed-node-0] 2026-02-18 05:58:41.034978 | orchestrator | 2026-02-18 05:58:41.034993 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-18 05:58:41.035005 | orchestrator | Wednesday 18 February 2026 05:57:40 +0000 (0:00:01.222) 0:06:29.356 **** 2026-02-18 05:58:41.035016 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 05:58:41.035026 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 05:58:41.035036 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 05:58:41.035046 | orchestrator |
2026-02-18 05:58:41.035056 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-18 05:58:41.035066 | orchestrator | Wednesday 18 February 2026 05:57:43 +0000 (0:00:02.552) 0:06:31.909 ****
2026-02-18 05:58:41.035076 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-18 05:58:41.035086 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-18 05:58:41.035097 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-18 05:58:41.035107 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-18 05:58:41.035116 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-18 05:58:41.035127 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-18 05:58:41.035136 | orchestrator |
2026-02-18 05:58:41.035146 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-18 05:58:41.035156 | orchestrator | Wednesday 18 February 2026 05:57:56 +0000 (0:00:13.152) 0:06:45.061 ****
2026-02-18 05:58:41.035166 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 05:58:41.035176 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 05:58:41.035185 | orchestrator |
2026-02-18 05:58:41.035195 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-18 05:58:41.035205 | orchestrator | Wednesday 18 February 2026 05:58:00 +0000 (0:00:03.941) 0:06:49.003 ****
2026-02-18 05:58:41.035215 | orchestrator | changed: [testbed-node-0]
2026-02-18 05:58:41.035225 | orchestrator |
2026-02-18 05:58:41.035235 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-18 05:58:41.035245 | orchestrator | Wednesday 18 February 2026 05:58:02 +0000 (0:00:02.460) 0:06:51.464 ****
2026-02-18 05:58:41.035255 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-18 05:58:41.035264 | orchestrator |
2026-02-18 05:58:41.035275 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-18 05:58:41.035285 | orchestrator | Wednesday 18 February 2026 05:58:04 +0000 (0:00:01.565) 0:06:53.029 ****
2026-02-18 05:58:41.035295 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-18 05:58:41.035305 | orchestrator |
2026-02-18 05:58:41.035315 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-18 05:58:41.035325 | orchestrator | Wednesday 18 February 2026 05:58:05 +0000 (0:00:01.582) 0:06:54.611 ****
2026-02-18 05:58:41.035335 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:58:41.035370 | orchestrator |
2026-02-18 05:58:41.035380 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-18 05:58:41.035390 | orchestrator | Wednesday 18 February 2026 05:58:07 +0000 (0:00:01.533) 0:06:56.145 ****
2026-02-18 05:58:41.035399 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.035409 | orchestrator |
2026-02-18 05:58:41.035419 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-18 05:58:41.035429 | orchestrator | Wednesday 18 February 2026 05:58:08 +0000 (0:00:01.166) 0:06:57.312 ****
2026-02-18 05:58:41.035439 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.035449 | orchestrator |
2026-02-18 05:58:41.035459 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-18 05:58:41.035468 | orchestrator | Wednesday 18 February 2026 05:58:09 +0000 (0:00:01.219) 0:06:58.531 ****
2026-02-18 05:58:41.035478 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.035488 | orchestrator |
2026-02-18 05:58:41.035512 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-18 05:58:41.035522 | orchestrator | Wednesday 18 February 2026 05:58:10 +0000 (0:00:01.146) 0:06:59.678 ****
2026-02-18 05:58:41.035532 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:58:41.035542 | orchestrator |
2026-02-18 05:58:41.035552 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-18 05:58:41.035562 | orchestrator | Wednesday 18 February 2026 05:58:12 +0000 (0:00:01.574) 0:07:01.253 ****
2026-02-18 05:58:41.035572 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.035581 | orchestrator |
2026-02-18 05:58:41.035591 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-18 05:58:41.035601 | orchestrator | Wednesday 18 February 2026 05:58:13 +0000 (0:00:01.207) 0:07:02.460 ****
2026-02-18 05:58:41.035611 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.035621 | orchestrator |
2026-02-18 05:58:41.035631 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-18 05:58:41.035640 | orchestrator | Wednesday 18 February 2026 05:58:14 +0000 (0:00:01.136) 0:07:03.596 ****
2026-02-18 05:58:41.035650 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:58:41.035660 | orchestrator |
2026-02-18 05:58:41.035670 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-18 05:58:41.035680 | orchestrator | Wednesday 18 February 2026 05:58:16 +0000 (0:00:01.514) 0:07:05.111 ****
2026-02-18 05:58:41.035689 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:58:41.035699 | orchestrator |
2026-02-18 05:58:41.035725 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-18 05:58:41.035735 | orchestrator | Wednesday 18 February 2026 05:58:17 +0000 (0:00:01.651) 0:07:06.763 ****
2026-02-18 05:58:41.035745 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.035755 | orchestrator |
2026-02-18 05:58:41.035764 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-18 05:58:41.035774 | orchestrator | Wednesday 18 February 2026 05:58:19 +0000 (0:00:01.233) 0:07:07.997 ****
2026-02-18 05:58:41.035783 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:58:41.035793 | orchestrator |
2026-02-18 05:58:41.035803 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-18 05:58:41.035830 | orchestrator | Wednesday 18 February 2026 05:58:20 +0000 (0:00:01.188) 0:07:09.186 ****
2026-02-18 05:58:41.035840 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.035850 | orchestrator |
2026-02-18 05:58:41.035860 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-18 05:58:41.035869 | orchestrator | Wednesday 18 February 2026 05:58:21 +0000 (0:00:01.141) 0:07:10.327 ****
2026-02-18 05:58:41.035879 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.035889 | orchestrator |
2026-02-18 05:58:41.035899 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-18 05:58:41.035908 | orchestrator | Wednesday 18 February 2026 05:58:22 +0000 (0:00:01.138) 0:07:11.466 ****
2026-02-18 05:58:41.035918 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.035935 | orchestrator |
2026-02-18 05:58:41.035945 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-18 05:58:41.035954 | orchestrator | Wednesday 18 February 2026 05:58:23 +0000 (0:00:01.114) 0:07:12.581 ****
2026-02-18 05:58:41.035964 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.035974 | orchestrator |
2026-02-18 05:58:41.035983 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-18 05:58:41.035993 | orchestrator | Wednesday 18 February 2026 05:58:24 +0000 (0:00:01.157) 0:07:13.739 ****
2026-02-18 05:58:41.036003 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.036012 | orchestrator |
2026-02-18 05:58:41.036022 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-18 05:58:41.036031 | orchestrator | Wednesday 18 February 2026 05:58:25 +0000 (0:00:01.096) 0:07:14.836 ****
2026-02-18 05:58:41.036041 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:58:41.036051 | orchestrator |
2026-02-18 05:58:41.036060 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-18 05:58:41.036070 | orchestrator | Wednesday 18 February 2026 05:58:27 +0000 (0:00:01.171) 0:07:16.008 ****
2026-02-18 05:58:41.036080 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:58:41.036089 | orchestrator |
2026-02-18 05:58:41.036099 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-18 05:58:41.036108 | orchestrator | Wednesday 18 February 2026 05:58:28 +0000 (0:00:01.172) 0:07:17.180 ****
2026-02-18 05:58:41.036118 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:58:41.036128 | orchestrator |
2026-02-18 05:58:41.036138 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-18 05:58:41.036147 | orchestrator | Wednesday 18 February 2026 05:58:29 +0000 (0:00:01.214) 0:07:18.394 ****
2026-02-18 05:58:41.036157 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.036167 | orchestrator |
2026-02-18 05:58:41.036176 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-18 05:58:41.036186 | orchestrator | Wednesday 18 February 2026 05:58:30 +0000 (0:00:01.159) 0:07:19.554 ****
2026-02-18 05:58:41.036195 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.036205 | orchestrator |
2026-02-18 05:58:41.036215 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-18 05:58:41.036225 | orchestrator | Wednesday 18 February 2026 05:58:31 +0000 (0:00:01.123) 0:07:20.678 ****
2026-02-18 05:58:41.036234 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.036244 | orchestrator |
2026-02-18 05:58:41.036253 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-18 05:58:41.036263 | orchestrator | Wednesday 18 February 2026 05:58:32 +0000 (0:00:01.123) 0:07:21.802 ****
2026-02-18 05:58:41.036273 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.036282 | orchestrator |
2026-02-18 05:58:41.036292 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-18 05:58:41.036302 | orchestrator | Wednesday 18 February 2026 05:58:34 +0000 (0:00:01.128) 0:07:22.930 ****
2026-02-18 05:58:41.036311 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.036321 | orchestrator |
2026-02-18 05:58:41.036330 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-18 05:58:41.036345 | orchestrator | Wednesday 18 February 2026 05:58:35 +0000 (0:00:01.170) 0:07:24.100 ****
2026-02-18 05:58:41.036355 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.036365 | orchestrator |
2026-02-18 05:58:41.036375 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-18 05:58:41.036384 | orchestrator | Wednesday 18 February 2026 05:58:36 +0000 (0:00:01.137) 0:07:25.238 ****
2026-02-18 05:58:41.036394 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.036403 | orchestrator |
2026-02-18 05:58:41.036413 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-18 05:58:41.036423 | orchestrator | Wednesday 18 February 2026 05:58:37 +0000 (0:00:01.088) 0:07:26.327 ****
2026-02-18 05:58:41.036432 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.036448 | orchestrator |
2026-02-18 05:58:41.036458 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-18 05:58:41.036467 | orchestrator | Wednesday 18 February 2026 05:58:38 +0000 (0:00:01.220) 0:07:27.548 ****
2026-02-18 05:58:41.036477 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.036487 | orchestrator |
2026-02-18 05:58:41.036496 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-18 05:58:41.036506 | orchestrator | Wednesday 18 February 2026 05:58:39 +0000 (0:00:01.143) 0:07:28.691 ****
2026-02-18 05:58:41.036515 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:58:41.036525 | orchestrator |
2026-02-18 05:58:41.036535 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-18 05:58:41.036544 | orchestrator | Wednesday 18 February 2026 05:58:41 +0000 (0:00:01.208) 0:07:29.900 ****
2026-02-18 05:59:33.007726 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.007841 | orchestrator |
2026-02-18 05:59:33.007858 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-18 05:59:33.007871 | orchestrator | Wednesday 18 February 2026 05:58:42 +0000 (0:00:01.129) 0:07:31.030 ****
2026-02-18 05:59:33.007882 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.007894 | orchestrator |
2026-02-18 05:59:33.007905 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-18 05:59:33.007916 | orchestrator | Wednesday 18 February 2026 05:58:43 +0000 (0:00:01.117) 0:07:32.147 ****
2026-02-18 05:59:33.007927 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:59:33.007938 | orchestrator |
2026-02-18 05:59:33.008015 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-18 05:59:33.008027 | orchestrator | Wednesday 18 February 2026 05:58:45 +0000 (0:00:02.017) 0:07:34.164 ****
2026-02-18 05:59:33.008038 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:59:33.008050 | orchestrator |
2026-02-18 05:59:33.008061 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-18 05:59:33.008073 | orchestrator | Wednesday 18 February 2026 05:58:47 +0000 (0:00:02.322) 0:07:36.487 ****
2026-02-18 05:59:33.008084 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-18 05:59:33.008097 | orchestrator |
2026-02-18 05:59:33.008108 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-18 05:59:33.008119 | orchestrator | Wednesday 18 February 2026 05:58:49 +0000 (0:00:01.531) 0:07:38.019 ****
2026-02-18 05:59:33.008130 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.008141 | orchestrator |
2026-02-18 05:59:33.008152 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-18 05:59:33.008169 | orchestrator | Wednesday 18 February 2026 05:58:50 +0000 (0:00:01.160) 0:07:39.180 ****
2026-02-18 05:59:33.008188 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.008205 | orchestrator |
2026-02-18 05:59:33.008223 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-18 05:59:33.008243 | orchestrator | Wednesday 18 February 2026 05:58:51 +0000 (0:00:01.173) 0:07:40.354 ****
2026-02-18 05:59:33.008262 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-18 05:59:33.008283 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-18 05:59:33.008303 | orchestrator |
2026-02-18 05:59:33.008319 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-18 05:59:33.008332 | orchestrator | Wednesday 18 February 2026 05:58:53 +0000 (0:00:01.912) 0:07:42.266 ****
2026-02-18 05:59:33.008344 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:59:33.008358 | orchestrator |
2026-02-18 05:59:33.008370 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-18 05:59:33.008383 | orchestrator | Wednesday 18 February 2026 05:58:55 +0000 (0:00:01.697) 0:07:43.964 ****
2026-02-18 05:59:33.008396 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.008409 | orchestrator |
2026-02-18 05:59:33.008421 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-18 05:59:33.008460 | orchestrator | Wednesday 18 February 2026 05:58:56 +0000 (0:00:01.201) 0:07:45.166 ****
2026-02-18 05:59:33.008473 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.008487 | orchestrator |
2026-02-18 05:59:33.008499 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-18 05:59:33.008509 | orchestrator | Wednesday 18 February 2026 05:58:57 +0000 (0:00:01.168) 0:07:46.335 ****
2026-02-18 05:59:33.008520 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.008531 | orchestrator |
2026-02-18 05:59:33.008542 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-18 05:59:33.008553 | orchestrator | Wednesday 18 February 2026 05:58:58 +0000 (0:00:01.124) 0:07:47.459 ****
2026-02-18 05:59:33.008564 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-18 05:59:33.008574 | orchestrator |
2026-02-18 05:59:33.008585 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-18 05:59:33.008596 | orchestrator | Wednesday 18 February 2026 05:59:00 +0000 (0:00:01.507) 0:07:48.966 ****
2026-02-18 05:59:33.008606 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:59:33.008617 | orchestrator |
2026-02-18 05:59:33.008628 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-18 05:59:33.008653 | orchestrator | Wednesday 18 February 2026 05:59:01 +0000 (0:00:01.767) 0:07:50.734 ****
2026-02-18 05:59:33.008664 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-18 05:59:33.008676 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-18 05:59:33.008686 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-18 05:59:33.008697 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.008707 | orchestrator |
2026-02-18 05:59:33.008718 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-18 05:59:33.008729 | orchestrator | Wednesday 18 February 2026 05:59:03 +0000 (0:00:01.161) 0:07:51.895 ****
2026-02-18 05:59:33.008740 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.008750 | orchestrator |
2026-02-18 05:59:33.008761 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-18 05:59:33.008772 | orchestrator | Wednesday 18 February 2026 05:59:04 +0000 (0:00:01.183) 0:07:53.079 ****
2026-02-18 05:59:33.008783 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.008793 | orchestrator |
2026-02-18 05:59:33.008804 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-18 05:59:33.008815 | orchestrator | Wednesday 18 February 2026 05:59:05 +0000 (0:00:01.220) 0:07:54.299 ****
2026-02-18 05:59:33.008825 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.008836 | orchestrator |
2026-02-18 05:59:33.008847 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-18 05:59:33.008875 | orchestrator | Wednesday 18 February 2026 05:59:06 +0000 (0:00:01.222) 0:07:55.522 ****
2026-02-18 05:59:33.008887 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.008898 | orchestrator |
2026-02-18 05:59:33.008909 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-18 05:59:33.008919 | orchestrator | Wednesday 18 February 2026 05:59:07 +0000 (0:00:01.174) 0:07:56.697 ****
2026-02-18 05:59:33.008930 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.008941 | orchestrator |
2026-02-18 05:59:33.008980 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-18 05:59:33.008991 | orchestrator | Wednesday 18 February 2026 05:59:09 +0000 (0:00:01.219) 0:07:57.917 ****
2026-02-18 05:59:33.009002 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:59:33.009012 | orchestrator |
2026-02-18 05:59:33.009023 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-18 05:59:33.009034 | orchestrator | Wednesday 18 February 2026 05:59:11 +0000 (0:00:02.562) 0:08:00.479 ****
2026-02-18 05:59:33.009045 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:59:33.009064 | orchestrator |
2026-02-18 05:59:33.009075 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-18 05:59:33.009086 | orchestrator | Wednesday 18 February 2026 05:59:12 +0000 (0:00:01.154) 0:08:01.634 ****
2026-02-18 05:59:33.009096 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-18 05:59:33.009107 | orchestrator |
2026-02-18 05:59:33.009118 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-18 05:59:33.009128 | orchestrator | Wednesday 18 February 2026 05:59:14 +0000 (0:00:01.449) 0:08:03.084 ****
2026-02-18 05:59:33.009139 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.009150 | orchestrator |
2026-02-18 05:59:33.009161 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-18 05:59:33.009172 | orchestrator | Wednesday 18 February 2026 05:59:15 +0000 (0:00:01.156) 0:08:04.241 ****
2026-02-18 05:59:33.009182 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.009193 | orchestrator |
2026-02-18 05:59:33.009204 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-18 05:59:33.009214 | orchestrator | Wednesday 18 February 2026 05:59:16 +0000 (0:00:01.153) 0:08:05.395 ****
2026-02-18 05:59:33.009233 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.009252 | orchestrator |
2026-02-18 05:59:33.009272 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-18 05:59:33.009291 | orchestrator | Wednesday 18 February 2026 05:59:17 +0000 (0:00:01.163) 0:08:06.558 ****
2026-02-18 05:59:33.009310 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.009324 | orchestrator |
2026-02-18 05:59:33.009334 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-18 05:59:33.009345 | orchestrator | Wednesday 18 February 2026 05:59:18 +0000 (0:00:01.206) 0:08:07.765 ****
2026-02-18 05:59:33.009356 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.009367 | orchestrator |
2026-02-18 05:59:33.009378 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-18 05:59:33.009388 | orchestrator | Wednesday 18 February 2026 05:59:20 +0000 (0:00:01.181) 0:08:08.946 ****
2026-02-18 05:59:33.009399 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.009410 | orchestrator |
2026-02-18 05:59:33.009421 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-18 05:59:33.009432 | orchestrator | Wednesday 18 February 2026 05:59:21 +0000 (0:00:01.146) 0:08:10.093 ****
2026-02-18 05:59:33.009442 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.009453 | orchestrator |
2026-02-18 05:59:33.009463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-18 05:59:33.009474 | orchestrator | Wednesday 18 February 2026 05:59:22 +0000 (0:00:01.148) 0:08:11.242 ****
2026-02-18 05:59:33.009484 | orchestrator | skipping: [testbed-node-0]
2026-02-18 05:59:33.009495 | orchestrator |
2026-02-18 05:59:33.009506 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-18 05:59:33.009516 | orchestrator | Wednesday 18 February 2026 05:59:23 +0000 (0:00:01.145) 0:08:12.388 ****
2026-02-18 05:59:33.009527 | orchestrator | ok: [testbed-node-0]
2026-02-18 05:59:33.009537 | orchestrator |
2026-02-18 05:59:33.009548 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-18 05:59:33.009559 | orchestrator | Wednesday 18 February 2026 05:59:24 +0000 (0:00:01.133) 0:08:13.521 ****
2026-02-18 05:59:33.009569 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-18 05:59:33.009580 | orchestrator |
2026-02-18 05:59:33.009612 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-18 05:59:33.009635 | orchestrator | Wednesday 18 February 2026 05:59:26 +0000 (0:00:01.487) 0:08:15.009 ****
2026-02-18 05:59:33.009646 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-18 05:59:33.009657 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-18 05:59:33.009668 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-18 05:59:33.009685 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-18 05:59:33.009695 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-18 05:59:33.009706 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-18 05:59:33.009716 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-18 05:59:33.009727 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-18 05:59:33.009738 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-18 05:59:33.009748 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-18 05:59:33.009759 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-18 05:59:33.009769 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-18 05:59:33.009780 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-18 05:59:33.009790 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-18 05:59:33.009808 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-18 06:00:21.430169 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-18 06:00:21.430308 | orchestrator |
2026-02-18 06:00:21.430334 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-18 06:00:21.430354 | orchestrator | Wednesday 18 February 2026 05:59:32 +0000 (0:00:06.856) 0:08:21.865 ****
2026-02-18 06:00:21.430370 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.430389 | orchestrator |
2026-02-18 06:00:21.430406 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-18 06:00:21.430423 | orchestrator | Wednesday 18 February 2026 05:59:34 +0000 (0:00:01.187) 0:08:23.053 ****
2026-02-18 06:00:21.430440 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.430457 | orchestrator |
2026-02-18 06:00:21.430474 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-18 06:00:21.430492 | orchestrator | Wednesday 18 February 2026 05:59:35 +0000 (0:00:01.159) 0:08:24.213 ****
2026-02-18 06:00:21.430509 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.430525 | orchestrator |
2026-02-18 06:00:21.430542 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-18 06:00:21.430560 | orchestrator | Wednesday 18 February 2026 05:59:36 +0000 (0:00:01.169) 0:08:25.383 ****
2026-02-18 06:00:21.430577 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.430594 | orchestrator |
2026-02-18 06:00:21.430609 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-18 06:00:21.430625 | orchestrator | Wednesday 18 February 2026 05:59:37 +0000 (0:00:01.126) 0:08:26.509 ****
2026-02-18 06:00:21.430639 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.430654 | orchestrator |
2026-02-18 06:00:21.430670 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-18 06:00:21.430685 | orchestrator | Wednesday 18 February 2026 05:59:38 +0000 (0:00:01.326) 0:08:27.836 ****
2026-02-18 06:00:21.430700 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.430716 | orchestrator |
2026-02-18 06:00:21.430732 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-18 06:00:21.430749 | orchestrator | Wednesday 18 February 2026 05:59:40 +0000 (0:00:01.179) 0:08:29.016 ****
2026-02-18 06:00:21.430765 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.430781 | orchestrator |
2026-02-18 06:00:21.430795 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-18 06:00:21.430811 | orchestrator | Wednesday 18 February 2026 05:59:41 +0000 (0:00:01.147) 0:08:30.163 ****
2026-02-18 06:00:21.430829 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.430845 | orchestrator |
2026-02-18 06:00:21.430861 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-18 06:00:21.430877 | orchestrator | Wednesday 18 February 2026 05:59:42 +0000 (0:00:01.164) 0:08:31.327 ****
2026-02-18 06:00:21.430925 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.430942 | orchestrator |
2026-02-18 06:00:21.430957 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-18 06:00:21.430974 | orchestrator | Wednesday 18 February 2026 05:59:43 +0000 (0:00:01.121) 0:08:32.449 ****
2026-02-18 06:00:21.430990 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431005 | orchestrator |
2026-02-18 06:00:21.431020 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-18 06:00:21.431035 | orchestrator | Wednesday 18 February 2026 05:59:44 +0000 (0:00:01.137) 0:08:33.587 ****
2026-02-18 06:00:21.431050 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431108 | orchestrator |
2026-02-18 06:00:21.431127 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-18 06:00:21.431144 | orchestrator | Wednesday 18 February 2026 05:59:45 +0000 (0:00:01.133) 0:08:34.720 ****
2026-02-18 06:00:21.431159 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431172 | orchestrator |
2026-02-18 06:00:21.431187 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-18 06:00:21.431201 | orchestrator | Wednesday 18 February 2026 05:59:46 +0000 (0:00:01.139) 0:08:35.860 ****
2026-02-18 06:00:21.431218 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431235 | orchestrator |
2026-02-18 06:00:21.431252 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-18 06:00:21.431269 | orchestrator | Wednesday 18 February 2026 05:59:48 +0000 (0:00:01.305) 0:08:37.165 ****
2026-02-18 06:00:21.431285 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431300 | orchestrator |
2026-02-18 06:00:21.431338 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-18 06:00:21.431358 | orchestrator | Wednesday 18 February 2026 05:59:49 +0000 (0:00:01.141) 0:08:38.307 ****
2026-02-18 06:00:21.431374 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431388 | orchestrator |
2026-02-18 06:00:21.431402 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-18 06:00:21.431419 | orchestrator | Wednesday 18 February 2026 05:59:50 +0000 (0:00:01.213) 0:08:39.521 ****
2026-02-18 06:00:21.431434 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431448 | orchestrator |
2026-02-18 06:00:21.431462 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-18 06:00:21.431478 | orchestrator | Wednesday 18 February 2026 05:59:51 +0000 (0:00:01.135) 0:08:40.657 ****
2026-02-18 06:00:21.431493 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431509 | orchestrator |
2026-02-18 06:00:21.431525 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-18 06:00:21.431541 | orchestrator | Wednesday 18 February 2026 05:59:52 +0000 (0:00:01.130) 0:08:41.787 ****
2026-02-18 06:00:21.431557 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431574 | orchestrator |
2026-02-18 06:00:21.431590 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-18 06:00:21.431607 | orchestrator | Wednesday 18 February 2026 05:59:54 +0000 (0:00:01.197) 0:08:42.985 ****
2026-02-18 06:00:21.431623 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431638 | orchestrator |
2026-02-18 06:00:21.431682 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-18 06:00:21.431695 | orchestrator | Wednesday 18 February 2026 05:59:55 +0000 (0:00:01.170) 0:08:44.156 ****
2026-02-18 06:00:21.431705 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431715 | orchestrator |
2026-02-18 06:00:21.431725 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-18 06:00:21.431735 | orchestrator | Wednesday 18 February 2026 05:59:56 +0000 (0:00:01.236) 0:08:45.392 ****
2026-02-18 06:00:21.431745 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431754 | orchestrator |
2026-02-18 06:00:21.431764 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-18 06:00:21.431790 | orchestrator | Wednesday 18 February 2026 05:59:57 +0000 (0:00:01.177) 0:08:46.570 ****
2026-02-18 06:00:21.431800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-18 06:00:21.431810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-18 06:00:21.431819 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-18 06:00:21.431829 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431839 | orchestrator |
2026-02-18 06:00:21.431847 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-18 06:00:21.431855 | orchestrator | Wednesday 18 February 2026 05:59:59 +0000 (0:00:01.461) 0:08:48.032 ****
2026-02-18 06:00:21.431863 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-18 06:00:21.431871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-18 06:00:21.431878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-18 06:00:21.431886 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431894 | orchestrator |
2026-02-18 06:00:21.431902 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-18 06:00:21.431910 | orchestrator | Wednesday 18 February 2026 06:00:00 +0000 (0:00:01.414) 0:08:49.446 ****
2026-02-18 06:00:21.431918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-18 06:00:21.431926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-18 06:00:21.431934 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-18 06:00:21.431942 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431950 | orchestrator |
2026-02-18 06:00:21.431957 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-18 06:00:21.431965 | orchestrator | Wednesday 18 February 2026 06:00:02 +0000 (0:00:01.176) 0:08:50.946 ****
2026-02-18 06:00:21.431973 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.431981 | orchestrator |
2026-02-18 06:00:21.431989 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-18 06:00:21.431997 | orchestrator | Wednesday 18 February 2026 06:00:03 +0000 (0:00:01.176) 0:08:52.123 ****
2026-02-18 06:00:21.432006 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-18 06:00:21.432014 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.432022 | orchestrator |
2026-02-18 06:00:21.432030 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-18 06:00:21.432038 | orchestrator | Wednesday 18 February 2026 06:00:04 +0000 (0:00:01.408) 0:08:53.532 ****
2026-02-18 06:00:21.432046 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:00:21.432054 | orchestrator |
2026-02-18 06:00:21.432103 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-18 06:00:21.432112 | orchestrator | Wednesday 18 February 2026 06:00:06 +0000 (0:00:01.788) 0:08:55.321 ****
2026-02-18 06:00:21.432121 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:00:21.432129 | orchestrator |
2026-02-18 06:00:21.432136 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-18 06:00:21.432144 | orchestrator | Wednesday 18 February 2026 06:00:07 +0000 (0:00:01.163) 0:08:56.485 ****
2026-02-18 06:00:21.432152 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-02-18 06:00:21.432161 | orchestrator |
2026-02-18 06:00:21.432169 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-18 06:00:21.432177 | orchestrator | Wednesday 18 February 2026 06:00:09 +0000 (0:00:01.582) 0:08:58.067 ****
2026-02-18 06:00:21.432185 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-18 06:00:21.432193 | orchestrator |
2026-02-18 06:00:21.432200 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-18 06:00:21.432208 | orchestrator | Wednesday 18 February 2026 06:00:12 +0000 (0:00:03.358) 0:09:01.425 ****
2026-02-18 06:00:21.432216 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:00:21.432224 | orchestrator |
2026-02-18 06:00:21.432240 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-18 06:00:21.432254 | orchestrator | Wednesday 18 February 2026 06:00:13 +0000 (0:00:01.330) 0:09:02.756 ****
2026-02-18 06:00:21.432262 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:00:21.432270 | orchestrator |
2026-02-18 06:00:21.432278 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-18 06:00:21.432286 | orchestrator | Wednesday 18 February 2026 06:00:15 +0000 (0:00:01.157) 0:09:03.913 ****
2026-02-18 06:00:21.432294 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:00:21.432302 | orchestrator |
2026-02-18 06:00:21.432311 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-18 06:00:21.432324 | orchestrator | Wednesday 18 February 2026 06:00:16 +0000 (0:00:01.197) 0:09:05.111 ****
2026-02-18 06:00:21.432337 | orchestrator | changed: [testbed-node-0]
2026-02-18 06:00:21.432350 | orchestrator |
2026-02-18 06:00:21.432363 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-18 06:00:21.432376 | orchestrator | Wednesday 18 February 2026 06:00:18 +0000 (0:00:02.050) 0:09:07.161 ****
2026-02-18 06:00:21.432387 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:00:21.432399 | orchestrator |
2026-02-18 06:00:21.432413 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-18 06:00:21.432426 | orchestrator | Wednesday 18 February 2026 06:00:19 +0000 (0:00:01.607) 0:09:08.769 ****
2026-02-18 06:00:21.432438 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:00:21.432451 | orchestrator |
2026-02-18 06:00:21.432473 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-18 06:01:19.268625 | orchestrator | Wednesday 18 February 2026 06:00:21 +0000 (0:00:01.525) 0:09:10.294 ****
2026-02-18 06:01:19.268739 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.268757 | orchestrator |
2026-02-18 06:01:19.268770 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-18 06:01:19.268781 | orchestrator | Wednesday 18 February 2026 06:00:22 +0000 (0:00:01.462) 0:09:11.757 ****
2026-02-18 06:01:19.268792 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.268804 | orchestrator |
2026-02-18 06:01:19.268815 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-18 06:01:19.268826 | orchestrator | Wednesday 18 February 2026 06:00:24 +0000 (0:00:01.821) 0:09:13.579 ****
2026-02-18 06:01:19.268836 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.268847 | orchestrator |
2026-02-18 06:01:19.268858 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-18 06:01:19.268869 | orchestrator | Wednesday 18 February 2026 06:00:26 +0000 (0:00:01.766) 0:09:15.345 ****
2026-02-18 06:01:19.268880 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-18 06:01:19.268892 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-18 06:01:19.268903 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-18 06:01:19.268913 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-02-18 06:01:19.268924 | orchestrator |
2026-02-18 06:01:19.268935 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-18 06:01:19.268946 | orchestrator | Wednesday 18 February 2026 06:00:30 +0000 (0:00:03.963) 0:09:19.309 ****
2026-02-18 06:01:19.268957 | orchestrator | changed: [testbed-node-0]
2026-02-18 06:01:19.268968 | orchestrator |
2026-02-18 06:01:19.268979 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-18 06:01:19.268990 | orchestrator | Wednesday 18 February 2026 06:00:32 +0000 (0:00:02.056) 0:09:21.365 ****
2026-02-18 06:01:19.269000 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.269011 | orchestrator |
2026-02-18 06:01:19.269022 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-18 06:01:19.269033 | orchestrator | Wednesday 18 February 2026 06:00:33 +0000 (0:00:01.181) 0:09:22.547 ****
2026-02-18 06:01:19.269044 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.269054 | orchestrator |
2026-02-18 06:01:19.269065 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-18 06:01:19.269098 | orchestrator | Wednesday 18 February 2026 06:00:34 +0000 (0:00:01.160) 0:09:23.707 ****
2026-02-18 06:01:19.269109 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.269120 | orchestrator |
2026-02-18 06:01:19.269131 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-18 06:01:19.269141 | orchestrator | Wednesday 18 February 2026 06:00:36 +0000 (0:00:02.130) 0:09:25.837 ****
2026-02-18 06:01:19.269152 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.269165 | orchestrator |
2026-02-18 06:01:19.269179 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-18 06:01:19.269359 | orchestrator | Wednesday 18 February 2026 06:00:38 +0000 (0:00:01.531) 0:09:27.369 ****
2026-02-18 06:01:19.269373 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:01:19.269387 | orchestrator |
2026-02-18 06:01:19.269399 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-18 06:01:19.269410 | orchestrator | Wednesday 18 February 2026 06:00:39 +0000 (0:00:01.205) 0:09:28.575 ****
2026-02-18 06:01:19.269421 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0
2026-02-18 06:01:19.269433 | orchestrator |
2026-02-18 06:01:19.269444 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-18 06:01:19.269454 | orchestrator | Wednesday 18 February 2026 06:00:41 +0000 (0:00:01.561) 0:09:30.137 ****
2026-02-18 06:01:19.269465 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:01:19.269476 | orchestrator |
2026-02-18 06:01:19.269487 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-18 06:01:19.269498 | orchestrator | Wednesday 18 February 2026 06:00:42 +0000 (0:00:01.136) 0:09:31.274 ****
2026-02-18 06:01:19.269508 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:01:19.269519 | orchestrator |
2026-02-18 06:01:19.269530 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-18 06:01:19.269541 | orchestrator | Wednesday 18 February 2026 06:00:43 +0000 (0:00:01.137) 0:09:32.411 ****
2026-02-18 06:01:19.269551 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0
2026-02-18 06:01:19.269562 | orchestrator |
2026-02-18 06:01:19.269587 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-18 06:01:19.269598 | orchestrator | Wednesday 18 February 2026 06:00:45 +0000 (0:00:01.572) 0:09:33.984 ****
2026-02-18 06:01:19.269609 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.269620 | orchestrator |
2026-02-18 06:01:19.269630 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-18 06:01:19.269642 | orchestrator | Wednesday 18 February 2026 06:00:47 +0000 (0:00:02.340) 0:09:36.324 ****
2026-02-18 06:01:19.269660 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.269677 | orchestrator |
2026-02-18 06:01:19.269695 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-18 06:01:19.269714 | orchestrator | Wednesday 18 February 2026 06:00:49 +0000 (0:00:02.020) 0:09:38.344 ****
2026-02-18 06:01:19.269734 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.269753 | orchestrator |
2026-02-18 06:01:19.269770 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-18 06:01:19.269781 | orchestrator | Wednesday 18 February 2026 06:00:51 +0000 (0:00:02.456) 0:09:40.801 ****
2026-02-18 06:01:19.269792 | orchestrator | changed: [testbed-node-0]
2026-02-18 06:01:19.269803 | orchestrator |
2026-02-18 06:01:19.269815 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-18 06:01:19.269826 | orchestrator | Wednesday 18 February 2026 06:00:55 +0000 (0:00:03.328) 0:09:44.129 ****
2026-02-18 06:01:19.269842 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0
2026-02-18 06:01:19.269858 | orchestrator |
2026-02-18 06:01:19.269895 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-18 06:01:19.269913 | orchestrator | Wednesday 18 February 2026 06:00:56 +0000 (0:00:01.578) 0:09:45.707 ****
2026-02-18 06:01:19.269924 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.269947 | orchestrator |
2026-02-18 06:01:19.269958 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-18 06:01:19.269969 | orchestrator | Wednesday 18 February 2026 06:00:59 +0000 (0:00:02.284) 0:09:47.992 ****
2026-02-18 06:01:19.269981 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:19.269999 | orchestrator |
2026-02-18 06:01:19.270011 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-18 06:01:19.270089 | orchestrator | Wednesday 18 February 2026 06:01:02 +0000 (0:00:03.174) 0:09:51.166 ****
2026-02-18 06:01:19.270100 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:01:19.270111 | orchestrator |
2026-02-18 06:01:19.270123 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-18 06:01:19.270133 | orchestrator | Wednesday 18 February 2026 06:01:03 +0000 (0:00:01.121) 0:09:52.288 ****
2026-02-18 06:01:19.270146 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-18 06:01:19.270160 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-18 06:01:19.270171 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-18 06:01:19.270183 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-18 06:01:19.270242 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-18 06:01:19.270255 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}])
2026-02-18 06:01:19.270267 | orchestrator |
2026-02-18 06:01:19.270286 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-02-18 06:01:19.270297 | orchestrator | Wednesday 18 February 2026 06:01:13 +0000 (0:00:09.747) 0:10:02.036 ****
2026-02-18 06:01:19.270308 | orchestrator | changed: [testbed-node-0]
2026-02-18 06:01:19.270319 | orchestrator |
2026-02-18 06:01:19.270330 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-18 06:01:19.270341 | orchestrator | Wednesday 18 February 2026 06:01:15 +0000 (0:00:02.500) 0:10:04.536 ****
2026-02-18 06:01:19.270352 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 06:01:19.270363 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 06:01:19.270383 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 06:01:19.270393 | orchestrator |
2026-02-18 06:01:19.270404 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-18 06:01:19.270415 | orchestrator | Wednesday 18 February 2026 06:01:17 +0000 (0:00:02.219) 0:10:06.756 ****
2026-02-18 06:01:19.270426 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 06:01:19.270437 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 06:01:19.270448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 06:01:19.270458 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:01:19.270469 | orchestrator |
2026-02-18 06:01:19.270480 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-02-18 06:01:19.270501 | orchestrator | Wednesday 18 February 2026 06:01:19 +0000 (0:00:01.368) 0:10:08.125 ****
2026-02-18 06:01:49.363875 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:01:49.363969 | orchestrator |
2026-02-18 06:01:49.363979 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-02-18 06:01:49.363987 | orchestrator | Wednesday 18 February 2026 06:01:20 +0000 (0:00:01.251) 0:10:09.377 ****
2026-02-18 06:01:49.363995 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:01:49.364003 | orchestrator |
2026-02-18 06:01:49.364010 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-18 06:01:49.364016 | orchestrator |
2026-02-18 06:01:49.364024 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-18 06:01:49.364031 | orchestrator | Wednesday 18 February 2026 06:01:23 +0000 (0:00:02.685) 0:10:12.062 ****
2026-02-18 06:01:49.364037 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:01:49.364044 | orchestrator |
2026-02-18 06:01:49.364051 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-18 06:01:49.364058 | orchestrator | Wednesday 18 February 2026 06:01:24 +0000 (0:00:01.161) 0:10:13.224 ****
2026-02-18 06:01:49.364064 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:01:49.364071 | orchestrator |
2026-02-18 06:01:49.364078 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-18 06:01:49.364084 | orchestrator | Wednesday 18 February 2026 06:01:25 +0000 (0:00:00.798) 0:10:14.022 ****
2026-02-18 06:01:49.364091 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:01:49.364098 | orchestrator |
2026-02-18 06:01:49.364104 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-18 06:01:49.364111 | orchestrator | Wednesday 18 February 2026 06:01:25 +0000 (0:00:00.820) 0:10:14.843 ****
2026-02-18 06:01:49.364118 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:01:49.364124 | orchestrator |
2026-02-18 06:01:49.364131 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-18 06:01:49.364138 | orchestrator | Wednesday 18 February 2026 06:01:26 +0000 (0:00:00.806) 0:10:15.649 ****
2026-02-18 06:01:49.364144 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-02-18 06:01:49.364151 | orchestrator |
2026-02-18 06:01:49.364158 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-18 06:01:49.364164 | orchestrator | Wednesday 18 February 2026 06:01:27 +0000 (0:00:01.119) 0:10:16.769 ****
2026-02-18 06:01:49.364171 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:01:49.364177 | orchestrator |
2026-02-18 06:01:49.364184 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-18 06:01:49.364190 | orchestrator | Wednesday 18 February 2026 06:01:29 +0000 (0:00:01.547) 0:10:18.316 ****
2026-02-18 06:01:49.364197 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:01:49.364204 | orchestrator |
2026-02-18 06:01:49.364210 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-18 06:01:49.364217 | orchestrator | Wednesday 18 February 2026 06:01:30 +0000 (0:00:01.449) 0:10:19.480 ****
2026-02-18 06:01:49.364223 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:01:49.364230 | orchestrator |
2026-02-18 06:01:49.364237 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-18 06:01:49.364329 | orchestrator | Wednesday 18 February 2026 06:01:32 +0000 (0:00:01.449) 0:10:20.930 ****
2026-02-18 06:01:49.364343 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:01:49.364354 | orchestrator |
2026-02-18 06:01:49.364367 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-18 06:01:49.364379 | orchestrator | Wednesday 18 February 2026 06:01:33 +0000 (0:00:01.139) 0:10:22.070 ****
2026-02-18 06:01:49.364391 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:01:49.364400 | orchestrator |
2026-02-18 06:01:49.364407 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-18 06:01:49.364413 | orchestrator | Wednesday 18 February 2026 06:01:34 +0000 (0:00:01.134) 0:10:23.204 ****
2026-02-18 06:01:49.364420 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:01:49.364427 | orchestrator |
2026-02-18 06:01:49.364433 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-18 06:01:49.364441 | orchestrator | Wednesday 18 February 2026 06:01:35 +0000 (0:00:01.145) 0:10:24.349 ****
2026-02-18 06:01:49.364449 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:01:49.364456 | orchestrator |
2026-02-18 06:01:49.364464 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-18 06:01:49.364472 | orchestrator | Wednesday 18 February 2026 06:01:36 +0000 (0:00:01.182) 0:10:25.532 ****
2026-02-18 06:01:49.364479 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:01:49.364487 | orchestrator |
2026-02-18 06:01:49.364495 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-18 06:01:49.364514 | orchestrator | Wednesday 18 February 2026 06:01:37 +0000 (0:00:01.179) 0:10:26.712 ****
2026-02-18 06:01:49.364522 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:01:49.364530 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-18 06:01:49.364538 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:01:49.364546 | orchestrator |
2026-02-18 06:01:49.364553 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-18 06:01:49.364561 | orchestrator | Wednesday 18 February 2026 06:01:39 +0000 (0:00:01.743) 0:10:28.456 ****
2026-02-18 06:01:49.364568 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:01:49.364576 | orchestrator |
2026-02-18 06:01:49.364583 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-18 06:01:49.364590 | orchestrator | Wednesday 18 February 2026 06:01:40 +0000 (0:00:01.296) 0:10:29.753 ****
2026-02-18 06:01:49.364598 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:01:49.364605 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-18 06:01:49.364613 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:01:49.364621 | orchestrator |
2026-02-18 06:01:49.364629 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-18 06:01:49.364636 | orchestrator | Wednesday 18 February 2026 06:01:43 +0000 (0:00:02.934) 0:10:32.688 ****
2026-02-18 06:01:49.364657 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-18 06:01:49.364665 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-18 06:01:49.364673 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-18 06:01:49.364681 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:01:49.364688 | orchestrator |
2026-02-18 06:01:49.364696 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-18 06:01:49.364704 | orchestrator | Wednesday 18 February 2026 06:01:45 +0000 (0:00:01.442) 0:10:34.131 ****
2026-02-18 06:01:49.364713 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:01:49.364730 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:01:49.364737 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:01:49.364745 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:01:49.364753 | orchestrator |
2026-02-18 06:01:49.364761 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-18 06:01:49.364768 | orchestrator | Wednesday 18 February 2026 06:01:46 +0000 (0:00:01.709) 0:10:35.840 ****
2026-02-18 06:01:49.364779 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:01:49.364791 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:01:49.364798 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:01:49.364805 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:01:49.364811 | orchestrator |
2026-02-18 06:01:49.364818 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-18 06:01:49.364825 | orchestrator | Wednesday 18 February 2026 06:01:48 +0000 (0:00:01.173) 0:10:37.014 ****
2026-02-18 06:01:49.364837 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:01:41.415079', 'end': '2026-02-18 06:01:41.462821', 'delta': '0:00:00.047742', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:01:49.364853 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '4c84206aa4db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:01:41.982672', 'end': '2026-02-18 06:01:42.045934', 'delta': '0:00:00.063262', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4c84206aa4db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:02:08.401988 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '11fb53bc1513', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:01:42.557724', 'end': '2026-02-18 06:01:42.603032', 'delta': '0:00:00.045308', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['11fb53bc1513'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:02:08.402170 | orchestrator |
2026-02-18 06:02:08.402189 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-18 06:02:08.402202 | orchestrator | Wednesday 18 February 2026 06:01:49 +0000 (0:00:01.216) 0:10:38.231 ****
2026-02-18 06:02:08.402213 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:02:08.402226 | orchestrator |
2026-02-18 06:02:08.402237 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-18 06:02:08.402249 | orchestrator | Wednesday 18 February 2026 06:01:50 +0000 (0:00:01.272) 0:10:39.503 ****
2026-02-18 06:02:08.402260 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:02:08.402272 | orchestrator |
2026-02-18 06:02:08.402283 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-18 06:02:08.402358 | orchestrator | Wednesday 18 February 2026 06:01:51 +0000 (0:00:01.277) 0:10:40.781 ****
2026-02-18 06:02:08.402369 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:02:08.402380 | orchestrator |
2026-02-18 06:02:08.402391 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-18 06:02:08.402402 | orchestrator | Wednesday 18 February 2026 06:01:53 +0000 (0:00:01.134) 0:10:41.915 ****
2026-02-18 06:02:08.402413 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-18 06:02:08.402424 | orchestrator |
2026-02-18 06:02:08.402435 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:02:08.402446 | orchestrator | Wednesday 18 February 2026 06:01:55 +0000 (0:00:02.360) 0:10:44.276 ****
2026-02-18 06:02:08.402457 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:02:08.402468 | orchestrator |
2026-02-18 06:02:08.402479 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-18 06:02:08.402491 | orchestrator | Wednesday 18 February 2026 06:01:56 +0000 (0:00:01.141) 0:10:45.418 ****
2026-02-18 06:02:08.402502 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:02:08.402513 | orchestrator |
2026-02-18 06:02:08.402527 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-18 06:02:08.402541 | orchestrator | Wednesday 18 February 2026 06:01:57 +0000 (0:00:01.160) 0:10:46.579 ****
2026-02-18 06:02:08.402553 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:02:08.402566 | orchestrator |
2026-02-18 06:02:08.402579 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:02:08.402592 | orchestrator | Wednesday 18 February 2026 06:01:59 +0000 (0:00:01.368) 0:10:47.947 ****
2026-02-18 06:02:08.402605 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:02:08.402618 | orchestrator |
2026-02-18 06:02:08.402631 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-18 06:02:08.402644 | orchestrator | Wednesday 18 February 2026 06:02:00 +0000 (0:00:01.171) 0:10:49.119 ****
2026-02-18 06:02:08.402656 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:08.402670 | orchestrator | 2026-02-18 06:02:08.402683 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 06:02:08.402695 | orchestrator | Wednesday 18 February 2026 06:02:01 +0000 (0:00:01.132) 0:10:50.251 **** 2026-02-18 06:02:08.402709 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:08.402722 | orchestrator | 2026-02-18 06:02:08.402750 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-18 06:02:08.402785 | orchestrator | Wednesday 18 February 2026 06:02:02 +0000 (0:00:01.140) 0:10:51.391 **** 2026-02-18 06:02:08.402797 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:08.402808 | orchestrator | 2026-02-18 06:02:08.402819 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 06:02:08.402830 | orchestrator | Wednesday 18 February 2026 06:02:03 +0000 (0:00:01.193) 0:10:52.585 **** 2026-02-18 06:02:08.402841 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:08.402852 | orchestrator | 2026-02-18 06:02:08.402862 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-18 06:02:08.402873 | orchestrator | Wednesday 18 February 2026 06:02:04 +0000 (0:00:01.143) 0:10:53.728 **** 2026-02-18 06:02:08.402884 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:08.402895 | orchestrator | 2026-02-18 06:02:08.402906 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 06:02:08.402917 | orchestrator | Wednesday 18 February 2026 06:02:06 +0000 (0:00:01.153) 0:10:54.881 **** 2026-02-18 06:02:08.402928 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:08.402939 | orchestrator | 2026-02-18 06:02:08.402950 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-18 06:02:08.402961 | orchestrator | Wednesday 18 February 2026 06:02:07 +0000 (0:00:01.131) 0:10:56.013 **** 2026-02-18 06:02:08.402992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:02:08.403007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:02:08.403018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:02:08.403031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 06:02:08.403044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:02:08.403055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:02:08.403074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:02:08.403105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '907e2eef', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-18 06:02:09.615422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:02:09.615529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:02:09.615545 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:09.615558 | orchestrator | 2026-02-18 06:02:09.615571 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 06:02:09.615583 | orchestrator | Wednesday 18 February 2026 06:02:08 +0000 (0:00:01.246) 0:10:57.260 **** 2026-02-18 06:02:09.615597 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:02:09.615649 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:02:09.615663 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:02:09.615675 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:02:09.615706 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:02:09.615719 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:02:09.615730 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:02:09.615751 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '907e2eef', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:02:09.615862 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:02:45.799182 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:02:45.799326 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.799342 | orchestrator | 2026-02-18 06:02:45.799350 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:02:45.799411 | 
orchestrator | Wednesday 18 February 2026 06:02:09 +0000 (0:00:01.224) 0:10:58.485 **** 2026-02-18 06:02:45.799423 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:02:45.799435 | orchestrator | 2026-02-18 06:02:45.799445 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:02:45.799456 | orchestrator | Wednesday 18 February 2026 06:02:12 +0000 (0:00:02.520) 0:11:01.005 **** 2026-02-18 06:02:45.799467 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:02:45.799477 | orchestrator | 2026-02-18 06:02:45.799487 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:02:45.799498 | orchestrator | Wednesday 18 February 2026 06:02:13 +0000 (0:00:01.175) 0:11:02.181 **** 2026-02-18 06:02:45.799509 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:02:45.799520 | orchestrator | 2026-02-18 06:02:45.799530 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:02:45.799538 | orchestrator | Wednesday 18 February 2026 06:02:14 +0000 (0:00:01.518) 0:11:03.700 **** 2026-02-18 06:02:45.799544 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.799551 | orchestrator | 2026-02-18 06:02:45.799557 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:02:45.799564 | orchestrator | Wednesday 18 February 2026 06:02:15 +0000 (0:00:01.117) 0:11:04.818 **** 2026-02-18 06:02:45.799570 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.799576 | orchestrator | 2026-02-18 06:02:45.799583 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:02:45.799602 | orchestrator | Wednesday 18 February 2026 06:02:17 +0000 (0:00:01.217) 0:11:06.035 **** 2026-02-18 06:02:45.799609 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.799615 | orchestrator | 2026-02-18 06:02:45.799622 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:02:45.799628 | orchestrator | Wednesday 18 February 2026 06:02:18 +0000 (0:00:01.170) 0:11:07.205 **** 2026-02-18 06:02:45.799634 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-18 06:02:45.799641 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-18 06:02:45.799647 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-18 06:02:45.799653 | orchestrator | 2026-02-18 06:02:45.799660 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:02:45.799666 | orchestrator | Wednesday 18 February 2026 06:02:20 +0000 (0:00:01.776) 0:11:08.982 **** 2026-02-18 06:02:45.799672 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-18 06:02:45.799678 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-18 06:02:45.799685 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-18 06:02:45.799691 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.799697 | orchestrator | 2026-02-18 06:02:45.799703 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:02:45.799709 | orchestrator | Wednesday 18 February 2026 06:02:21 +0000 (0:00:01.246) 0:11:10.229 **** 2026-02-18 06:02:45.799717 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.799724 | orchestrator | 2026-02-18 06:02:45.799732 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 06:02:45.799739 | orchestrator | Wednesday 18 February 2026 06:02:22 +0000 (0:00:01.138) 0:11:11.367 **** 2026-02-18 06:02:45.799747 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:02:45.799755 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-18 
06:02:45.799763 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:02:45.799770 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:02:45.799784 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:02:45.799791 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:02:45.799799 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:02:45.799806 | orchestrator | 2026-02-18 06:02:45.799813 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 06:02:45.799821 | orchestrator | Wednesday 18 February 2026 06:02:24 +0000 (0:00:02.193) 0:11:13.561 **** 2026-02-18 06:02:45.799828 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:02:45.799835 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-18 06:02:45.799843 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:02:45.799850 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:02:45.799872 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:02:45.799880 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:02:45.799888 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:02:45.799895 | orchestrator | 2026-02-18 06:02:45.799902 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-18 06:02:45.799909 | orchestrator | Wednesday 18 February 2026 06:02:26 +0000 (0:00:02.234) 
0:11:15.796 **** 2026-02-18 06:02:45.799916 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.799924 | orchestrator | 2026-02-18 06:02:45.799931 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-18 06:02:45.799938 | orchestrator | Wednesday 18 February 2026 06:02:27 +0000 (0:00:00.905) 0:11:16.702 **** 2026-02-18 06:02:45.799945 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.799952 | orchestrator | 2026-02-18 06:02:45.799959 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-18 06:02:45.799966 | orchestrator | Wednesday 18 February 2026 06:02:28 +0000 (0:00:00.883) 0:11:17.585 **** 2026-02-18 06:02:45.799974 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.799981 | orchestrator | 2026-02-18 06:02:45.799988 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-18 06:02:45.799996 | orchestrator | Wednesday 18 February 2026 06:02:29 +0000 (0:00:00.847) 0:11:18.433 **** 2026-02-18 06:02:45.800003 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.800010 | orchestrator | 2026-02-18 06:02:45.800017 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-18 06:02:45.800025 | orchestrator | Wednesday 18 February 2026 06:02:30 +0000 (0:00:01.273) 0:11:19.706 **** 2026-02-18 06:02:45.800032 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.800039 | orchestrator | 2026-02-18 06:02:45.800046 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-18 06:02:45.800054 | orchestrator | Wednesday 18 February 2026 06:02:31 +0000 (0:00:00.785) 0:11:20.492 **** 2026-02-18 06:02:45.800061 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-18 06:02:45.800069 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  
2026-02-18 06:02:45.800076 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-18 06:02:45.800082 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.800088 | orchestrator | 2026-02-18 06:02:45.800095 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-18 06:02:45.800104 | orchestrator | Wednesday 18 February 2026 06:02:32 +0000 (0:00:01.094) 0:11:21.587 **** 2026-02-18 06:02:45.800111 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-18 06:02:45.800117 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-18 06:02:45.800128 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-18 06:02:45.800134 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-18 06:02:45.800140 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-18 06:02:45.800147 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-18 06:02:45.800153 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.800159 | orchestrator | 2026-02-18 06:02:45.800165 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-18 06:02:45.800171 | orchestrator | Wednesday 18 February 2026 06:02:34 +0000 (0:00:01.389) 0:11:22.977 **** 2026-02-18 06:02:45.800178 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-02-18 06:02:45.800184 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-18 06:02:45.800190 | orchestrator | 2026-02-18 06:02:45.800196 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-18 06:02:45.800202 | orchestrator | Wednesday 18 February 2026 06:02:37 +0000 (0:00:03.215) 
0:11:26.192 **** 2026-02-18 06:02:45.800209 | orchestrator | changed: [testbed-node-1] 2026-02-18 06:02:45.800215 | orchestrator | 2026-02-18 06:02:45.800221 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 06:02:45.800227 | orchestrator | Wednesday 18 February 2026 06:02:39 +0000 (0:00:02.274) 0:11:28.466 **** 2026-02-18 06:02:45.800234 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-18 06:02:45.800241 | orchestrator | 2026-02-18 06:02:45.800250 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 06:02:45.800261 | orchestrator | Wednesday 18 February 2026 06:02:40 +0000 (0:00:01.169) 0:11:29.636 **** 2026-02-18 06:02:45.800271 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-18 06:02:45.800281 | orchestrator | 2026-02-18 06:02:45.800291 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 06:02:45.800300 | orchestrator | Wednesday 18 February 2026 06:02:41 +0000 (0:00:01.143) 0:11:30.780 **** 2026-02-18 06:02:45.800309 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:02:45.800319 | orchestrator | 2026-02-18 06:02:45.800329 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 06:02:45.800339 | orchestrator | Wednesday 18 February 2026 06:02:43 +0000 (0:00:01.552) 0:11:32.333 **** 2026-02-18 06:02:45.800349 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:02:45.800389 | orchestrator | 2026-02-18 06:02:45.800401 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-18 06:02:45.800412 | orchestrator | Wednesday 18 February 2026 06:02:44 +0000 (0:00:01.147) 0:11:33.480 **** 2026-02-18 06:02:45.800422 | orchestrator | skipping: [testbed-node-1] 
2026-02-18 06:02:45.800432 | orchestrator | 2026-02-18 06:02:45.800443 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-18 06:02:45.800462 | orchestrator | Wednesday 18 February 2026 06:02:45 +0000 (0:00:01.182) 0:11:34.663 **** 2026-02-18 06:03:27.813201 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813282 | orchestrator | 2026-02-18 06:03:27.813289 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-18 06:03:27.813295 | orchestrator | Wednesday 18 February 2026 06:02:46 +0000 (0:00:01.126) 0:11:35.790 **** 2026-02-18 06:03:27.813299 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:03:27.813304 | orchestrator | 2026-02-18 06:03:27.813308 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-18 06:03:27.813312 | orchestrator | Wednesday 18 February 2026 06:02:48 +0000 (0:00:01.607) 0:11:37.397 **** 2026-02-18 06:03:27.813316 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813320 | orchestrator | 2026-02-18 06:03:27.813324 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-18 06:03:27.813344 | orchestrator | Wednesday 18 February 2026 06:02:49 +0000 (0:00:01.142) 0:11:38.540 **** 2026-02-18 06:03:27.813348 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813352 | orchestrator | 2026-02-18 06:03:27.813356 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-18 06:03:27.813359 | orchestrator | Wednesday 18 February 2026 06:02:50 +0000 (0:00:01.179) 0:11:39.720 **** 2026-02-18 06:03:27.813363 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:03:27.813367 | orchestrator | 2026-02-18 06:03:27.813371 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-18 06:03:27.813374 | orchestrator | Wednesday 18 
February 2026 06:02:52 +0000 (0:00:01.590) 0:11:41.311 **** 2026-02-18 06:03:27.813378 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:03:27.813382 | orchestrator | 2026-02-18 06:03:27.813386 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-18 06:03:27.813390 | orchestrator | Wednesday 18 February 2026 06:02:54 +0000 (0:00:01.578) 0:11:42.889 **** 2026-02-18 06:03:27.813393 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813397 | orchestrator | 2026-02-18 06:03:27.813401 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 06:03:27.813405 | orchestrator | Wednesday 18 February 2026 06:02:54 +0000 (0:00:00.764) 0:11:43.654 **** 2026-02-18 06:03:27.813409 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:03:27.813413 | orchestrator | 2026-02-18 06:03:27.813417 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 06:03:27.813420 | orchestrator | Wednesday 18 February 2026 06:02:55 +0000 (0:00:00.795) 0:11:44.450 **** 2026-02-18 06:03:27.813424 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813428 | orchestrator | 2026-02-18 06:03:27.813432 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 06:03:27.813483 | orchestrator | Wednesday 18 February 2026 06:02:56 +0000 (0:00:00.798) 0:11:45.249 **** 2026-02-18 06:03:27.813487 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813491 | orchestrator | 2026-02-18 06:03:27.813495 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 06:03:27.813499 | orchestrator | Wednesday 18 February 2026 06:02:57 +0000 (0:00:00.783) 0:11:46.032 **** 2026-02-18 06:03:27.813502 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813506 | orchestrator | 2026-02-18 06:03:27.813510 | orchestrator | TASK 
[ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 06:03:27.813514 | orchestrator | Wednesday 18 February 2026 06:02:57 +0000 (0:00:00.806) 0:11:46.838 **** 2026-02-18 06:03:27.813517 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813521 | orchestrator | 2026-02-18 06:03:27.813525 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 06:03:27.813529 | orchestrator | Wednesday 18 February 2026 06:02:58 +0000 (0:00:00.919) 0:11:47.758 **** 2026-02-18 06:03:27.813541 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813547 | orchestrator | 2026-02-18 06:03:27.813553 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 06:03:27.813558 | orchestrator | Wednesday 18 February 2026 06:02:59 +0000 (0:00:00.814) 0:11:48.573 **** 2026-02-18 06:03:27.813564 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:03:27.813569 | orchestrator | 2026-02-18 06:03:27.813575 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 06:03:27.813581 | orchestrator | Wednesday 18 February 2026 06:03:00 +0000 (0:00:00.865) 0:11:49.438 **** 2026-02-18 06:03:27.813587 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:03:27.813593 | orchestrator | 2026-02-18 06:03:27.813600 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 06:03:27.813606 | orchestrator | Wednesday 18 February 2026 06:03:01 +0000 (0:00:00.812) 0:11:50.251 **** 2026-02-18 06:03:27.813612 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:03:27.813619 | orchestrator | 2026-02-18 06:03:27.813623 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-18 06:03:27.813632 | orchestrator | Wednesday 18 February 2026 06:03:02 +0000 (0:00:00.794) 0:11:51.046 **** 2026-02-18 06:03:27.813636 | 
orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813640 | orchestrator | 2026-02-18 06:03:27.813643 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-18 06:03:27.813647 | orchestrator | Wednesday 18 February 2026 06:03:02 +0000 (0:00:00.814) 0:11:51.861 **** 2026-02-18 06:03:27.813651 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813654 | orchestrator | 2026-02-18 06:03:27.813658 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-18 06:03:27.813662 | orchestrator | Wednesday 18 February 2026 06:03:03 +0000 (0:00:00.795) 0:11:52.656 **** 2026-02-18 06:03:27.813666 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813669 | orchestrator | 2026-02-18 06:03:27.813673 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-18 06:03:27.813677 | orchestrator | Wednesday 18 February 2026 06:03:04 +0000 (0:00:00.762) 0:11:53.419 **** 2026-02-18 06:03:27.813681 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813685 | orchestrator | 2026-02-18 06:03:27.813689 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-18 06:03:27.813692 | orchestrator | Wednesday 18 February 2026 06:03:05 +0000 (0:00:00.797) 0:11:54.216 **** 2026-02-18 06:03:27.813696 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813700 | orchestrator | 2026-02-18 06:03:27.813714 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-18 06:03:27.813718 | orchestrator | Wednesday 18 February 2026 06:03:06 +0000 (0:00:00.767) 0:11:54.984 **** 2026-02-18 06:03:27.813722 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813726 | orchestrator | 2026-02-18 06:03:27.813730 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 
2026-02-18 06:03:27.813733 | orchestrator | Wednesday 18 February 2026 06:03:06 +0000 (0:00:00.775) 0:11:55.759 **** 2026-02-18 06:03:27.813737 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813741 | orchestrator | 2026-02-18 06:03:27.813745 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-18 06:03:27.813749 | orchestrator | Wednesday 18 February 2026 06:03:07 +0000 (0:00:00.789) 0:11:56.549 **** 2026-02-18 06:03:27.813753 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813757 | orchestrator | 2026-02-18 06:03:27.813760 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-18 06:03:27.813764 | orchestrator | Wednesday 18 February 2026 06:03:08 +0000 (0:00:00.808) 0:11:57.358 **** 2026-02-18 06:03:27.813768 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813772 | orchestrator | 2026-02-18 06:03:27.813775 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-18 06:03:27.813779 | orchestrator | Wednesday 18 February 2026 06:03:09 +0000 (0:00:00.810) 0:11:58.168 **** 2026-02-18 06:03:27.813784 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813788 | orchestrator | 2026-02-18 06:03:27.813793 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-18 06:03:27.813797 | orchestrator | Wednesday 18 February 2026 06:03:10 +0000 (0:00:00.815) 0:11:58.983 **** 2026-02-18 06:03:27.813802 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813806 | orchestrator | 2026-02-18 06:03:27.813810 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-18 06:03:27.813815 | orchestrator | Wednesday 18 February 2026 06:03:11 +0000 (0:00:00.911) 0:11:59.895 **** 2026-02-18 06:03:27.813819 | orchestrator | skipping: [testbed-node-1] 2026-02-18 
06:03:27.813823 | orchestrator | 2026-02-18 06:03:27.813828 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-18 06:03:27.813832 | orchestrator | Wednesday 18 February 2026 06:03:11 +0000 (0:00:00.795) 0:12:00.690 **** 2026-02-18 06:03:27.813837 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:03:27.813841 | orchestrator | 2026-02-18 06:03:27.813845 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-18 06:03:27.813853 | orchestrator | Wednesday 18 February 2026 06:03:13 +0000 (0:00:01.616) 0:12:02.306 **** 2026-02-18 06:03:27.813861 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:03:27.813866 | orchestrator | 2026-02-18 06:03:27.813870 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-18 06:03:27.813874 | orchestrator | Wednesday 18 February 2026 06:03:15 +0000 (0:00:02.125) 0:12:04.432 **** 2026-02-18 06:03:27.813879 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-02-18 06:03:27.813884 | orchestrator | 2026-02-18 06:03:27.813889 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-18 06:03:27.813893 | orchestrator | Wednesday 18 February 2026 06:03:16 +0000 (0:00:01.100) 0:12:05.533 **** 2026-02-18 06:03:27.813897 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813902 | orchestrator | 2026-02-18 06:03:27.813906 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-18 06:03:27.813910 | orchestrator | Wednesday 18 February 2026 06:03:17 +0000 (0:00:01.127) 0:12:06.660 **** 2026-02-18 06:03:27.813916 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.813922 | orchestrator | 2026-02-18 06:03:27.813928 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 
2026-02-18 06:03:27.813936 | orchestrator | Wednesday 18 February 2026 06:03:18 +0000 (0:00:01.134) 0:12:07.795 **** 2026-02-18 06:03:27.813942 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-18 06:03:27.813949 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-18 06:03:27.813955 | orchestrator | 2026-02-18 06:03:27.813961 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-18 06:03:27.813967 | orchestrator | Wednesday 18 February 2026 06:03:20 +0000 (0:00:01.813) 0:12:09.609 **** 2026-02-18 06:03:27.813973 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:03:27.813979 | orchestrator | 2026-02-18 06:03:27.813985 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-18 06:03:27.813991 | orchestrator | Wednesday 18 February 2026 06:03:22 +0000 (0:00:01.517) 0:12:11.126 **** 2026-02-18 06:03:27.813997 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.814003 | orchestrator | 2026-02-18 06:03:27.814009 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-18 06:03:27.814053 | orchestrator | Wednesday 18 February 2026 06:03:23 +0000 (0:00:01.137) 0:12:12.263 **** 2026-02-18 06:03:27.814058 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.814063 | orchestrator | 2026-02-18 06:03:27.814067 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-18 06:03:27.814072 | orchestrator | Wednesday 18 February 2026 06:03:24 +0000 (0:00:00.795) 0:12:13.059 **** 2026-02-18 06:03:27.814076 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:03:27.814080 | orchestrator | 2026-02-18 06:03:27.814085 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-18 06:03:27.814089 | orchestrator | 
Wednesday 18 February 2026 06:03:24 +0000 (0:00:00.789) 0:12:13.849 **** 2026-02-18 06:03:27.814093 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-02-18 06:03:27.814098 | orchestrator | 2026-02-18 06:03:27.814102 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-18 06:03:27.814107 | orchestrator | Wednesday 18 February 2026 06:03:26 +0000 (0:00:01.095) 0:12:14.944 **** 2026-02-18 06:03:27.814111 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:03:27.814115 | orchestrator | 2026-02-18 06:03:27.814125 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-18 06:03:27.814133 | orchestrator | Wednesday 18 February 2026 06:03:27 +0000 (0:00:01.733) 0:12:16.678 **** 2026-02-18 06:04:08.468410 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 06:04:08.468547 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 06:04:08.468587 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 06:04:08.468600 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.468614 | orchestrator | 2026-02-18 06:04:08.468626 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-18 06:04:08.468637 | orchestrator | Wednesday 18 February 2026 06:03:29 +0000 (0:00:01.233) 0:12:17.912 **** 2026-02-18 06:04:08.468648 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.468660 | orchestrator | 2026-02-18 06:04:08.468671 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-18 06:04:08.468682 | orchestrator | Wednesday 18 February 2026 06:03:30 +0000 (0:00:01.131) 0:12:19.043 **** 2026-02-18 06:04:08.468693 | orchestrator | skipping: [testbed-node-1] 2026-02-18 
06:04:08.468704 | orchestrator | 2026-02-18 06:04:08.468715 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-18 06:04:08.468727 | orchestrator | Wednesday 18 February 2026 06:03:31 +0000 (0:00:01.226) 0:12:20.270 **** 2026-02-18 06:04:08.468738 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.468749 | orchestrator | 2026-02-18 06:04:08.468760 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-18 06:04:08.468771 | orchestrator | Wednesday 18 February 2026 06:03:32 +0000 (0:00:01.142) 0:12:21.413 **** 2026-02-18 06:04:08.468782 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.468793 | orchestrator | 2026-02-18 06:04:08.468805 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-18 06:04:08.468816 | orchestrator | Wednesday 18 February 2026 06:03:33 +0000 (0:00:01.137) 0:12:22.551 **** 2026-02-18 06:04:08.468827 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.468838 | orchestrator | 2026-02-18 06:04:08.468849 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-18 06:04:08.468860 | orchestrator | Wednesday 18 February 2026 06:03:34 +0000 (0:00:00.798) 0:12:23.349 **** 2026-02-18 06:04:08.468871 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:08.468883 | orchestrator | 2026-02-18 06:04:08.468894 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-18 06:04:08.468905 | orchestrator | Wednesday 18 February 2026 06:03:36 +0000 (0:00:02.203) 0:12:25.553 **** 2026-02-18 06:04:08.468931 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:08.468946 | orchestrator | 2026-02-18 06:04:08.468960 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-18 06:04:08.468973 | orchestrator | Wednesday 18 February 
2026 06:03:37 +0000 (0:00:00.762) 0:12:26.316 **** 2026-02-18 06:04:08.468987 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-02-18 06:04:08.469000 | orchestrator | 2026-02-18 06:04:08.469013 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-18 06:04:08.469025 | orchestrator | Wednesday 18 February 2026 06:03:38 +0000 (0:00:01.277) 0:12:27.593 **** 2026-02-18 06:04:08.469038 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469050 | orchestrator | 2026-02-18 06:04:08.469063 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-18 06:04:08.469076 | orchestrator | Wednesday 18 February 2026 06:03:39 +0000 (0:00:01.256) 0:12:28.850 **** 2026-02-18 06:04:08.469090 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469102 | orchestrator | 2026-02-18 06:04:08.469115 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-18 06:04:08.469128 | orchestrator | Wednesday 18 February 2026 06:03:41 +0000 (0:00:01.164) 0:12:30.014 **** 2026-02-18 06:04:08.469141 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469153 | orchestrator | 2026-02-18 06:04:08.469167 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-18 06:04:08.469181 | orchestrator | Wednesday 18 February 2026 06:03:42 +0000 (0:00:01.146) 0:12:31.161 **** 2026-02-18 06:04:08.469194 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469206 | orchestrator | 2026-02-18 06:04:08.469226 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-18 06:04:08.469239 | orchestrator | Wednesday 18 February 2026 06:03:43 +0000 (0:00:01.147) 0:12:32.309 **** 2026-02-18 06:04:08.469252 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469265 | 
orchestrator | 2026-02-18 06:04:08.469278 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-18 06:04:08.469291 | orchestrator | Wednesday 18 February 2026 06:03:44 +0000 (0:00:01.205) 0:12:33.514 **** 2026-02-18 06:04:08.469304 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469316 | orchestrator | 2026-02-18 06:04:08.469327 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-18 06:04:08.469338 | orchestrator | Wednesday 18 February 2026 06:03:45 +0000 (0:00:01.161) 0:12:34.676 **** 2026-02-18 06:04:08.469349 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469360 | orchestrator | 2026-02-18 06:04:08.469371 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-18 06:04:08.469381 | orchestrator | Wednesday 18 February 2026 06:03:47 +0000 (0:00:01.203) 0:12:35.879 **** 2026-02-18 06:04:08.469393 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469404 | orchestrator | 2026-02-18 06:04:08.469415 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-18 06:04:08.469426 | orchestrator | Wednesday 18 February 2026 06:03:48 +0000 (0:00:01.157) 0:12:37.037 **** 2026-02-18 06:04:08.469437 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:08.469448 | orchestrator | 2026-02-18 06:04:08.469459 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-18 06:04:08.469470 | orchestrator | Wednesday 18 February 2026 06:03:49 +0000 (0:00:00.862) 0:12:37.899 **** 2026-02-18 06:04:08.469481 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-02-18 06:04:08.469492 | orchestrator | 2026-02-18 06:04:08.469504 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-18 
06:04:08.469548 | orchestrator | Wednesday 18 February 2026 06:03:50 +0000 (0:00:01.115) 0:12:39.015 **** 2026-02-18 06:04:08.469560 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-02-18 06:04:08.469572 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-18 06:04:08.469583 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-18 06:04:08.469594 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-18 06:04:08.469605 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-18 06:04:08.469616 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-18 06:04:08.469627 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-18 06:04:08.469638 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-18 06:04:08.469649 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 06:04:08.469660 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 06:04:08.469671 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 06:04:08.469682 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 06:04:08.469693 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 06:04:08.469704 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 06:04:08.469714 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-02-18 06:04:08.469725 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-02-18 06:04:08.469736 | orchestrator | 2026-02-18 06:04:08.469747 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-18 06:04:08.469758 | orchestrator | Wednesday 18 February 2026 06:03:56 +0000 (0:00:06.561) 0:12:45.577 **** 2026-02-18 06:04:08.469769 | orchestrator | skipping: 
[testbed-node-1] 2026-02-18 06:04:08.469780 | orchestrator | 2026-02-18 06:04:08.469791 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-18 06:04:08.469809 | orchestrator | Wednesday 18 February 2026 06:03:57 +0000 (0:00:00.777) 0:12:46.355 **** 2026-02-18 06:04:08.469820 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469831 | orchestrator | 2026-02-18 06:04:08.469842 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-18 06:04:08.469853 | orchestrator | Wednesday 18 February 2026 06:03:58 +0000 (0:00:00.786) 0:12:47.141 **** 2026-02-18 06:04:08.469869 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469881 | orchestrator | 2026-02-18 06:04:08.469892 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-18 06:04:08.469903 | orchestrator | Wednesday 18 February 2026 06:03:59 +0000 (0:00:00.789) 0:12:47.931 **** 2026-02-18 06:04:08.469914 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469925 | orchestrator | 2026-02-18 06:04:08.469936 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-18 06:04:08.469947 | orchestrator | Wednesday 18 February 2026 06:03:59 +0000 (0:00:00.933) 0:12:48.864 **** 2026-02-18 06:04:08.469958 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.469969 | orchestrator | 2026-02-18 06:04:08.469980 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-18 06:04:08.469991 | orchestrator | Wednesday 18 February 2026 06:04:00 +0000 (0:00:00.855) 0:12:49.720 **** 2026-02-18 06:04:08.470002 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.470063 | orchestrator | 2026-02-18 06:04:08.470078 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 
2026-02-18 06:04:08.470089 | orchestrator | Wednesday 18 February 2026 06:04:01 +0000 (0:00:00.789) 0:12:50.510 **** 2026-02-18 06:04:08.470100 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.470111 | orchestrator | 2026-02-18 06:04:08.470122 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-18 06:04:08.470133 | orchestrator | Wednesday 18 February 2026 06:04:02 +0000 (0:00:00.834) 0:12:51.345 **** 2026-02-18 06:04:08.470144 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.470155 | orchestrator | 2026-02-18 06:04:08.470166 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-18 06:04:08.470177 | orchestrator | Wednesday 18 February 2026 06:04:03 +0000 (0:00:00.802) 0:12:52.147 **** 2026-02-18 06:04:08.470188 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.470199 | orchestrator | 2026-02-18 06:04:08.470210 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-18 06:04:08.470220 | orchestrator | Wednesday 18 February 2026 06:04:04 +0000 (0:00:00.777) 0:12:52.925 **** 2026-02-18 06:04:08.470231 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.470242 | orchestrator | 2026-02-18 06:04:08.470253 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-18 06:04:08.470264 | orchestrator | Wednesday 18 February 2026 06:04:04 +0000 (0:00:00.798) 0:12:53.724 **** 2026-02-18 06:04:08.470275 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.470286 | orchestrator | 2026-02-18 06:04:08.470297 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-18 06:04:08.470308 | orchestrator | Wednesday 18 February 2026 06:04:05 +0000 (0:00:00.787) 0:12:54.512 **** 2026-02-18 
06:04:08.470319 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.470329 | orchestrator | 2026-02-18 06:04:08.470341 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-18 06:04:08.470351 | orchestrator | Wednesday 18 February 2026 06:04:06 +0000 (0:00:00.761) 0:12:55.273 **** 2026-02-18 06:04:08.470362 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.470373 | orchestrator | 2026-02-18 06:04:08.470384 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-18 06:04:08.470395 | orchestrator | Wednesday 18 February 2026 06:04:07 +0000 (0:00:01.281) 0:12:56.555 **** 2026-02-18 06:04:08.470406 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:08.470424 | orchestrator | 2026-02-18 06:04:08.470435 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-18 06:04:08.470454 | orchestrator | Wednesday 18 February 2026 06:04:08 +0000 (0:00:00.773) 0:12:57.329 **** 2026-02-18 06:04:56.712772 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.712893 | orchestrator | 2026-02-18 06:04:56.712911 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-18 06:04:56.712925 | orchestrator | Wednesday 18 February 2026 06:04:09 +0000 (0:00:00.921) 0:12:58.250 **** 2026-02-18 06:04:56.712937 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.712949 | orchestrator | 2026-02-18 06:04:56.712960 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-18 06:04:56.712971 | orchestrator | Wednesday 18 February 2026 06:04:10 +0000 (0:00:00.809) 0:12:59.060 **** 2026-02-18 06:04:56.712982 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.712993 | orchestrator | 2026-02-18 06:04:56.713005 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, 
radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:04:56.713018 | orchestrator | Wednesday 18 February 2026 06:04:11 +0000 (0:00:00.845) 0:12:59.906 **** 2026-02-18 06:04:56.713029 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.713040 | orchestrator | 2026-02-18 06:04:56.713051 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:04:56.713061 | orchestrator | Wednesday 18 February 2026 06:04:11 +0000 (0:00:00.784) 0:13:00.690 **** 2026-02-18 06:04:56.713072 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.713083 | orchestrator | 2026-02-18 06:04:56.713094 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:04:56.713105 | orchestrator | Wednesday 18 February 2026 06:04:12 +0000 (0:00:00.823) 0:13:01.513 **** 2026-02-18 06:04:56.713116 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.713127 | orchestrator | 2026-02-18 06:04:56.713138 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:04:56.713149 | orchestrator | Wednesday 18 February 2026 06:04:13 +0000 (0:00:00.781) 0:13:02.295 **** 2026-02-18 06:04:56.713159 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.713170 | orchestrator | 2026-02-18 06:04:56.713181 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:04:56.713192 | orchestrator | Wednesday 18 February 2026 06:04:14 +0000 (0:00:00.786) 0:13:03.081 **** 2026-02-18 06:04:56.713203 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-18 06:04:56.713229 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-18 06:04:56.713241 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-18 06:04:56.713252 | orchestrator | skipping: [testbed-node-1] 
2026-02-18 06:04:56.713263 | orchestrator | 2026-02-18 06:04:56.713273 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:04:56.713284 | orchestrator | Wednesday 18 February 2026 06:04:15 +0000 (0:00:01.071) 0:13:04.152 **** 2026-02-18 06:04:56.713295 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-18 06:04:56.713308 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-18 06:04:56.713321 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-18 06:04:56.713333 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.713346 | orchestrator | 2026-02-18 06:04:56.713359 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:04:56.713371 | orchestrator | Wednesday 18 February 2026 06:04:16 +0000 (0:00:01.148) 0:13:05.301 **** 2026-02-18 06:04:56.713384 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-18 06:04:56.713397 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-18 06:04:56.713410 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-18 06:04:56.713422 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.713435 | orchestrator | 2026-02-18 06:04:56.713471 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:04:56.713484 | orchestrator | Wednesday 18 February 2026 06:04:17 +0000 (0:00:01.057) 0:13:06.359 **** 2026-02-18 06:04:56.713497 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.713510 | orchestrator | 2026-02-18 06:04:56.713523 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:04:56.713536 | orchestrator | Wednesday 18 February 2026 06:04:18 +0000 (0:00:00.802) 0:13:07.162 **** 2026-02-18 06:04:56.713549 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2026-02-18 06:04:56.713561 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.713573 | orchestrator | 2026-02-18 06:04:56.713606 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-18 06:04:56.713619 | orchestrator | Wednesday 18 February 2026 06:04:19 +0000 (0:00:01.019) 0:13:08.182 **** 2026-02-18 06:04:56.713632 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:56.713645 | orchestrator | 2026-02-18 06:04:56.713657 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-18 06:04:56.713670 | orchestrator | Wednesday 18 February 2026 06:04:20 +0000 (0:00:01.517) 0:13:09.700 **** 2026-02-18 06:04:56.713680 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:56.713691 | orchestrator | 2026-02-18 06:04:56.713702 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-18 06:04:56.713712 | orchestrator | Wednesday 18 February 2026 06:04:21 +0000 (0:00:00.811) 0:13:10.511 **** 2026-02-18 06:04:56.713723 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1 2026-02-18 06:04:56.713735 | orchestrator | 2026-02-18 06:04:56.713746 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-18 06:04:56.713756 | orchestrator | Wednesday 18 February 2026 06:04:22 +0000 (0:00:01.175) 0:13:11.687 **** 2026-02-18 06:04:56.713767 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-02-18 06:04:56.713778 | orchestrator | 2026-02-18 06:04:56.713789 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-18 06:04:56.713799 | orchestrator | Wednesday 18 February 2026 06:04:25 +0000 (0:00:03.185) 0:13:14.872 **** 2026-02-18 06:04:56.713810 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.713821 | orchestrator | 
2026-02-18 06:04:56.713832 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-18 06:04:56.713860 | orchestrator | Wednesday 18 February 2026 06:04:27 +0000 (0:00:01.203) 0:13:16.076 **** 2026-02-18 06:04:56.713871 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:56.713882 | orchestrator | 2026-02-18 06:04:56.713893 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-18 06:04:56.713904 | orchestrator | Wednesday 18 February 2026 06:04:28 +0000 (0:00:01.159) 0:13:17.236 **** 2026-02-18 06:04:56.713915 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:56.713926 | orchestrator | 2026-02-18 06:04:56.713945 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-18 06:04:56.713964 | orchestrator | Wednesday 18 February 2026 06:04:29 +0000 (0:00:01.209) 0:13:18.446 **** 2026-02-18 06:04:56.713983 | orchestrator | changed: [testbed-node-1] 2026-02-18 06:04:56.714000 | orchestrator | 2026-02-18 06:04:56.714092 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-18 06:04:56.714117 | orchestrator | Wednesday 18 February 2026 06:04:31 +0000 (0:00:02.053) 0:13:20.499 **** 2026-02-18 06:04:56.714135 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:56.714153 | orchestrator | 2026-02-18 06:04:56.714173 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-18 06:04:56.714190 | orchestrator | Wednesday 18 February 2026 06:04:33 +0000 (0:00:01.629) 0:13:22.129 **** 2026-02-18 06:04:56.714209 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:56.714226 | orchestrator | 2026-02-18 06:04:56.714237 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-18 06:04:56.714248 | orchestrator | Wednesday 18 February 2026 06:04:34 +0000 (0:00:01.564) 0:13:23.694 
**** 2026-02-18 06:04:56.714272 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:56.714282 | orchestrator | 2026-02-18 06:04:56.714293 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-18 06:04:56.714303 | orchestrator | Wednesday 18 February 2026 06:04:36 +0000 (0:00:01.537) 0:13:25.232 **** 2026-02-18 06:04:56.714314 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:04:56.714324 | orchestrator | 2026-02-18 06:04:56.714335 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-18 06:04:56.714346 | orchestrator | Wednesday 18 February 2026 06:04:37 +0000 (0:00:01.583) 0:13:26.815 **** 2026-02-18 06:04:56.714356 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:04:56.714367 | orchestrator | 2026-02-18 06:04:56.714386 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-18 06:04:56.714397 | orchestrator | Wednesday 18 February 2026 06:04:39 +0000 (0:00:01.600) 0:13:28.415 **** 2026-02-18 06:04:56.714407 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 06:04:56.714418 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-18 06:04:56.714429 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-18 06:04:56.714440 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-18 06:04:56.714451 | orchestrator | 2026-02-18 06:04:56.714461 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-18 06:04:56.714472 | orchestrator | Wednesday 18 February 2026 06:04:43 +0000 (0:00:03.920) 0:13:32.335 **** 2026-02-18 06:04:56.714483 | orchestrator | changed: [testbed-node-1] 2026-02-18 06:04:56.714494 | orchestrator | 2026-02-18 06:04:56.714504 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container 
command] ************************** 2026-02-18 06:04:56.714515 | orchestrator | Wednesday 18 February 2026 06:04:45 +0000 (0:00:02.146) 0:13:34.482 **** 2026-02-18 06:04:56.714526 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:56.714536 | orchestrator | 2026-02-18 06:04:56.714547 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-18 06:04:56.714558 | orchestrator | Wednesday 18 February 2026 06:04:46 +0000 (0:00:01.166) 0:13:35.649 **** 2026-02-18 06:04:56.714568 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:56.714579 | orchestrator | 2026-02-18 06:04:56.714639 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-18 06:04:56.714651 | orchestrator | Wednesday 18 February 2026 06:04:47 +0000 (0:00:01.153) 0:13:36.803 **** 2026-02-18 06:04:56.714661 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:56.714672 | orchestrator | 2026-02-18 06:04:56.714683 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-18 06:04:56.714694 | orchestrator | Wednesday 18 February 2026 06:04:49 +0000 (0:00:01.755) 0:13:38.559 **** 2026-02-18 06:04:56.714704 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:04:56.714715 | orchestrator | 2026-02-18 06:04:56.714726 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-18 06:04:56.714736 | orchestrator | Wednesday 18 February 2026 06:04:51 +0000 (0:00:01.542) 0:13:40.101 **** 2026-02-18 06:04:56.714747 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.714758 | orchestrator | 2026-02-18 06:04:56.714769 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-18 06:04:56.714779 | orchestrator | Wednesday 18 February 2026 06:04:52 +0000 (0:00:00.786) 0:13:40.887 **** 2026-02-18 06:04:56.714790 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-02-18 06:04:56.714801 | orchestrator | 2026-02-18 06:04:56.714812 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-18 06:04:56.714822 | orchestrator | Wednesday 18 February 2026 06:04:53 +0000 (0:00:01.148) 0:13:42.036 **** 2026-02-18 06:04:56.714833 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.714843 | orchestrator | 2026-02-18 06:04:56.714854 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-18 06:04:56.714872 | orchestrator | Wednesday 18 February 2026 06:04:54 +0000 (0:00:01.198) 0:13:43.235 **** 2026-02-18 06:04:56.714882 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:04:56.714893 | orchestrator | 2026-02-18 06:04:56.714904 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-18 06:04:56.714915 | orchestrator | Wednesday 18 February 2026 06:04:55 +0000 (0:00:01.197) 0:13:44.432 **** 2026-02-18 06:04:56.714925 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-02-18 06:04:56.714936 | orchestrator | 2026-02-18 06:04:56.714959 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-18 06:06:05.015672 | orchestrator | Wednesday 18 February 2026 06:04:56 +0000 (0:00:01.136) 0:13:45.569 **** 2026-02-18 06:06:05.015846 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:06:05.015864 | orchestrator | 2026-02-18 06:06:05.015877 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-18 06:06:05.015888 | orchestrator | Wednesday 18 February 2026 06:04:59 +0000 (0:00:02.390) 0:13:47.960 **** 2026-02-18 06:06:05.015898 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:06:05.015908 | orchestrator | 2026-02-18 06:06:05.015918 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-02-18 06:06:05.015929 | orchestrator | Wednesday 18 February 2026 06:05:01 +0000 (0:00:01.932) 0:13:49.892 **** 2026-02-18 06:06:05.015939 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:06:05.015950 | orchestrator | 2026-02-18 06:06:05.015960 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-18 06:06:05.015970 | orchestrator | Wednesday 18 February 2026 06:05:03 +0000 (0:00:02.458) 0:13:52.351 **** 2026-02-18 06:06:05.015980 | orchestrator | changed: [testbed-node-1] 2026-02-18 06:06:05.015991 | orchestrator | 2026-02-18 06:06:05.016001 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-18 06:06:05.016016 | orchestrator | Wednesday 18 February 2026 06:05:06 +0000 (0:00:02.938) 0:13:55.289 **** 2026-02-18 06:06:05.016032 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-02-18 06:06:05.016050 | orchestrator | 2026-02-18 06:06:05.016066 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-18 06:06:05.016083 | orchestrator | Wednesday 18 February 2026 06:05:07 +0000 (0:00:01.139) 0:13:56.429 **** 2026-02-18 06:06:05.016100 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-18 06:06:05.016111 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:06:05.016121 | orchestrator | 2026-02-18 06:06:05.016131 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-18 06:06:05.016140 | orchestrator | Wednesday 18 February 2026 06:05:30 +0000 (0:00:22.931) 0:14:19.361 **** 2026-02-18 06:06:05.016150 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:06:05.016159 | orchestrator | 2026-02-18 06:06:05.016169 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-18 06:06:05.016195 | orchestrator | Wednesday 18 February 2026 06:05:33 +0000 (0:00:02.635) 0:14:21.997 **** 2026-02-18 06:06:05.016205 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:06:05.016218 | orchestrator | 2026-02-18 06:06:05.016235 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-18 06:06:05.016252 | orchestrator | Wednesday 18 February 2026 06:05:33 +0000 (0:00:00.809) 0:14:22.806 **** 2026-02-18 06:06:05.016276 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-18 06:06:05.016298 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-18 06:06:05.016336 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-18 06:06:05.016349 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-18 06:06:05.016363 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-18 06:06:05.016405 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}])  2026-02-18 06:06:05.016420 | orchestrator | 2026-02-18 06:06:05.016432 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-18 06:06:05.016444 | orchestrator | Wednesday 18 February 2026 06:05:43 +0000 (0:00:09.703) 0:14:32.510 **** 2026-02-18 06:06:05.016461 | orchestrator | changed: [testbed-node-1] 2026-02-18 06:06:05.016479 | orchestrator | 
2026-02-18 06:06:05.016495 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:06:05.016512 | orchestrator | Wednesday 18 February 2026 06:05:45 +0000 (0:00:02.172) 0:14:34.683 **** 2026-02-18 06:06:05.016529 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:06:05.016548 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-18 06:06:05.016566 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-18 06:06:05.016649 | orchestrator | 2026-02-18 06:06:05.016662 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:06:05.016672 | orchestrator | Wednesday 18 February 2026 06:05:47 +0000 (0:00:01.569) 0:14:36.252 **** 2026-02-18 06:06:05.016682 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-18 06:06:05.016755 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-18 06:06:05.016769 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-18 06:06:05.016779 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:06:05.016789 | orchestrator | 2026-02-18 06:06:05.016799 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-18 06:06:05.016808 | orchestrator | Wednesday 18 February 2026 06:05:48 +0000 (0:00:01.016) 0:14:37.269 **** 2026-02-18 06:06:05.016818 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:06:05.016828 | orchestrator | 2026-02-18 06:06:05.016838 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-18 06:06:05.016848 | orchestrator | Wednesday 18 February 2026 06:05:49 +0000 (0:00:00.833) 0:14:38.102 **** 2026-02-18 06:06:05.016857 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:06:05.016867 | orchestrator | 2026-02-18 06:06:05.016888 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-18 06:06:05.016898 | orchestrator | 2026-02-18 06:06:05.016916 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-18 06:06:05.016926 | orchestrator | Wednesday 18 February 2026 06:05:51 +0000 (0:00:02.190) 0:14:40.293 **** 2026-02-18 06:06:05.016935 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:05.016945 | orchestrator | 2026-02-18 06:06:05.016955 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-18 06:06:05.016964 | orchestrator | Wednesday 18 February 2026 06:05:52 +0000 (0:00:01.144) 0:14:41.437 **** 2026-02-18 06:06:05.016974 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:05.016984 | orchestrator | 2026-02-18 06:06:05.016993 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-18 06:06:05.017009 | orchestrator | Wednesday 18 February 2026 06:05:53 +0000 (0:00:00.774) 0:14:42.212 **** 2026-02-18 06:06:05.017026 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:05.017042 | orchestrator | 2026-02-18 06:06:05.017056 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-18 06:06:05.017066 | orchestrator | Wednesday 18 February 2026 06:05:54 +0000 (0:00:00.780) 0:14:42.992 **** 2026-02-18 06:06:05.017122 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:05.017134 | orchestrator | 2026-02-18 06:06:05.017151 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 06:06:05.017168 | orchestrator | Wednesday 18 February 
2026 06:05:54 +0000 (0:00:00.799) 0:14:43.792 **** 2026-02-18 06:06:05.017180 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-18 06:06:05.017190 | orchestrator | 2026-02-18 06:06:05.017199 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 06:06:05.017209 | orchestrator | Wednesday 18 February 2026 06:05:56 +0000 (0:00:01.286) 0:14:45.078 **** 2026-02-18 06:06:05.017219 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:05.017228 | orchestrator | 2026-02-18 06:06:05.017238 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-18 06:06:05.017248 | orchestrator | Wednesday 18 February 2026 06:05:57 +0000 (0:00:01.477) 0:14:46.556 **** 2026-02-18 06:06:05.017257 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:05.017267 | orchestrator | 2026-02-18 06:06:05.017277 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 06:06:05.017287 | orchestrator | Wednesday 18 February 2026 06:05:58 +0000 (0:00:01.129) 0:14:47.686 **** 2026-02-18 06:06:05.017296 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:05.017306 | orchestrator | 2026-02-18 06:06:05.017345 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 06:06:05.017356 | orchestrator | Wednesday 18 February 2026 06:06:00 +0000 (0:00:01.475) 0:14:49.162 **** 2026-02-18 06:06:05.017366 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:05.017376 | orchestrator | 2026-02-18 06:06:05.017386 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-18 06:06:05.017395 | orchestrator | Wednesday 18 February 2026 06:06:01 +0000 (0:00:01.207) 0:14:50.370 **** 2026-02-18 06:06:05.017405 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:05.017415 | orchestrator | 2026-02-18 06:06:05.017424 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-18 06:06:05.017434 | orchestrator | Wednesday 18 February 2026 06:06:02 +0000 (0:00:01.164) 0:14:51.534 **** 2026-02-18 06:06:05.017444 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:05.017453 | orchestrator | 2026-02-18 06:06:05.017463 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-18 06:06:05.017472 | orchestrator | Wednesday 18 February 2026 06:06:03 +0000 (0:00:01.170) 0:14:52.705 **** 2026-02-18 06:06:05.017482 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:05.017492 | orchestrator | 2026-02-18 06:06:05.017502 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-18 06:06:05.017553 | orchestrator | Wednesday 18 February 2026 06:06:04 +0000 (0:00:01.170) 0:14:53.876 **** 2026-02-18 06:06:31.002231 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:31.002353 | orchestrator | 2026-02-18 06:06:31.002392 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-18 06:06:31.002417 | orchestrator | Wednesday 18 February 2026 06:06:06 +0000 (0:00:01.146) 0:14:55.022 **** 2026-02-18 06:06:31.002429 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:06:31.002441 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:06:31.002453 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-18 06:06:31.002464 | orchestrator | 2026-02-18 06:06:31.002475 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-18 06:06:31.002486 | orchestrator | Wednesday 18 February 2026 06:06:08 +0000 (0:00:02.054) 0:14:57.077 **** 2026-02-18 06:06:31.002497 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:31.002509 | 
orchestrator | 2026-02-18 06:06:31.002520 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-18 06:06:31.002531 | orchestrator | Wednesday 18 February 2026 06:06:09 +0000 (0:00:01.293) 0:14:58.371 **** 2026-02-18 06:06:31.002542 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:06:31.002553 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:06:31.002563 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-18 06:06:31.002574 | orchestrator | 2026-02-18 06:06:31.002585 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-18 06:06:31.002596 | orchestrator | Wednesday 18 February 2026 06:06:12 +0000 (0:00:03.272) 0:15:01.643 **** 2026-02-18 06:06:31.002608 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-18 06:06:31.002619 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-18 06:06:31.002630 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-18 06:06:31.002641 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:31.002652 | orchestrator | 2026-02-18 06:06:31.002663 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-18 06:06:31.002691 | orchestrator | Wednesday 18 February 2026 06:06:14 +0000 (0:00:01.817) 0:15:03.461 **** 2026-02-18 06:06:31.002706 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 06:06:31.002721 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-18 06:06:31.002756 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 06:06:31.002772 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:31.002785 | orchestrator | 2026-02-18 06:06:31.002799 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-18 06:06:31.002812 | orchestrator | Wednesday 18 February 2026 06:06:16 +0000 (0:00:02.059) 0:15:05.521 **** 2026-02-18 06:06:31.002828 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:31.002845 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:31.002882 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:31.002896 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:31.002909 | orchestrator | 2026-02-18 06:06:31.002922 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-18 06:06:31.002935 | orchestrator | Wednesday 18 February 2026 06:06:17 +0000 (0:00:01.227) 0:15:06.748 **** 2026-02-18 06:06:31.002969 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:06:10.054753', 'end': '2026-02-18 06:06:10.110379', 'delta': '0:00:00.055626', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-18 06:06:31.002986 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:06:10.989315', 'end': '2026-02-18 06:06:11.037347', 'delta': '0:00:00.048032', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-18 06:06:31.003006 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '11fb53bc1513', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:06:11.519352', 'end': '2026-02-18 06:06:11.567323', 'delta': '0:00:00.047971', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['11fb53bc1513'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-18 06:06:31.003019 | orchestrator | 2026-02-18 06:06:31.003032 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-18 06:06:31.003045 | orchestrator | Wednesday 18 February 2026 06:06:19 +0000 (0:00:01.380) 0:15:08.128 **** 2026-02-18 06:06:31.003057 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:31.003071 | orchestrator | 2026-02-18 06:06:31.003084 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-18 06:06:31.003097 | orchestrator | Wednesday 18 February 2026 06:06:20 +0000 (0:00:01.336) 0:15:09.465 **** 2026-02-18 06:06:31.003110 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:31.003135 | orchestrator | 2026-02-18 06:06:31.003146 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-18 06:06:31.003157 | orchestrator | Wednesday 18 February 2026 06:06:21 +0000 (0:00:01.269) 0:15:10.734 **** 2026-02-18 06:06:31.003168 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:31.003179 | orchestrator | 2026-02-18 06:06:31.003189 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-02-18 06:06:31.003200 | orchestrator | Wednesday 18 February 2026 06:06:23 +0000 (0:00:01.180) 0:15:11.915 **** 2026-02-18 06:06:31.003211 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:06:31.003222 | orchestrator | 2026-02-18 06:06:31.003233 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:06:31.003244 | orchestrator | Wednesday 18 February 2026 06:06:25 +0000 (0:00:02.009) 0:15:13.924 **** 2026-02-18 06:06:31.003254 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:31.003265 | orchestrator | 2026-02-18 06:06:31.003276 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 06:06:31.003286 | orchestrator | Wednesday 18 February 2026 06:06:26 +0000 (0:00:01.155) 0:15:15.080 **** 2026-02-18 06:06:31.003297 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:31.003308 | orchestrator | 2026-02-18 06:06:31.003318 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 06:06:31.003329 | orchestrator | Wednesday 18 February 2026 06:06:27 +0000 (0:00:01.150) 0:15:16.231 **** 2026-02-18 06:06:31.003340 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:31.003351 | orchestrator | 2026-02-18 06:06:31.003361 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:06:31.003372 | orchestrator | Wednesday 18 February 2026 06:06:28 +0000 (0:00:01.230) 0:15:17.462 **** 2026-02-18 06:06:31.003383 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:31.003394 | orchestrator | 2026-02-18 06:06:31.003404 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 06:06:31.003415 | orchestrator | Wednesday 18 February 2026 06:06:29 +0000 (0:00:01.210) 0:15:18.672 **** 
2026-02-18 06:06:31.003426 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:31.003437 | orchestrator | 2026-02-18 06:06:31.003448 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 06:06:31.003466 | orchestrator | Wednesday 18 February 2026 06:06:30 +0000 (0:00:01.190) 0:15:19.863 **** 2026-02-18 06:06:39.125254 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:39.125366 | orchestrator | 2026-02-18 06:06:39.125383 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-18 06:06:39.125397 | orchestrator | Wednesday 18 February 2026 06:06:32 +0000 (0:00:01.127) 0:15:20.990 **** 2026-02-18 06:06:39.125409 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:39.125420 | orchestrator | 2026-02-18 06:06:39.125432 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 06:06:39.125443 | orchestrator | Wednesday 18 February 2026 06:06:33 +0000 (0:00:01.180) 0:15:22.171 **** 2026-02-18 06:06:39.125454 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:39.125465 | orchestrator | 2026-02-18 06:06:39.125476 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-18 06:06:39.125487 | orchestrator | Wednesday 18 February 2026 06:06:34 +0000 (0:00:01.128) 0:15:23.299 **** 2026-02-18 06:06:39.125498 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:39.125509 | orchestrator | 2026-02-18 06:06:39.125520 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 06:06:39.125532 | orchestrator | Wednesday 18 February 2026 06:06:35 +0000 (0:00:01.112) 0:15:24.412 **** 2026-02-18 06:06:39.125543 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:39.125554 | orchestrator | 2026-02-18 06:06:39.125565 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-18 06:06:39.125575 | orchestrator | Wednesday 18 February 2026 06:06:36 +0000 (0:00:01.100) 0:15:25.512 **** 2026-02-18 06:06:39.125614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:06:39.125645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:06:39.125657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:06:39.125669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 06:06:39.125684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:06:39.125695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:06:39.125724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:06:39.125769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd638dc9f', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-18 06:06:39.125794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:06:39.125805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:06:39.125817 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:39.125828 | orchestrator | 2026-02-18 06:06:39.125839 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 06:06:39.125850 | orchestrator | Wednesday 18 February 2026 06:06:37 +0000 (0:00:01.258) 0:15:26.771 **** 2026-02-18 06:06:39.125862 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:39.125885 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:46.875383 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:46.875565 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:46.875586 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:46.875598 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:46.875610 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:46.875650 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd638dc9f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:46.875673 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:46.875685 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:06:46.875697 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:46.875711 | orchestrator | 2026-02-18 06:06:46.875723 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:06:46.875736 | 
orchestrator | Wednesday 18 February 2026 06:06:39 +0000 (0:00:01.227) 0:15:27.998 **** 2026-02-18 06:06:46.875747 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:46.875839 | orchestrator | 2026-02-18 06:06:46.875851 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:06:46.875862 | orchestrator | Wednesday 18 February 2026 06:06:40 +0000 (0:00:01.563) 0:15:29.561 **** 2026-02-18 06:06:46.875874 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:46.875884 | orchestrator | 2026-02-18 06:06:46.875895 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:06:46.875906 | orchestrator | Wednesday 18 February 2026 06:06:41 +0000 (0:00:01.162) 0:15:30.724 **** 2026-02-18 06:06:46.875919 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:06:46.875931 | orchestrator | 2026-02-18 06:06:46.875943 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:06:46.875956 | orchestrator | Wednesday 18 February 2026 06:06:43 +0000 (0:00:01.504) 0:15:32.228 **** 2026-02-18 06:06:46.875969 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:46.875990 | orchestrator | 2026-02-18 06:06:46.876003 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:06:46.876016 | orchestrator | Wednesday 18 February 2026 06:06:44 +0000 (0:00:01.165) 0:15:33.394 **** 2026-02-18 06:06:46.876028 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:46.876041 | orchestrator | 2026-02-18 06:06:46.876068 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:06:46.876081 | orchestrator | Wednesday 18 February 2026 06:06:45 +0000 (0:00:01.222) 0:15:34.616 **** 2026-02-18 06:06:46.876094 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:06:46.876116 | orchestrator | 2026-02-18 06:06:46.876128 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:06:46.876147 | orchestrator | Wednesday 18 February 2026 06:06:46 +0000 (0:00:01.125) 0:15:35.742 **** 2026-02-18 06:07:26.591014 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-18 06:07:26.591134 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-18 06:07:26.591150 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-18 06:07:26.591162 | orchestrator | 2026-02-18 06:07:26.591175 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:07:26.591188 | orchestrator | Wednesday 18 February 2026 06:06:48 +0000 (0:00:02.041) 0:15:37.784 **** 2026-02-18 06:07:26.591200 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-18 06:07:26.591212 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-18 06:07:26.591223 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-18 06:07:26.591242 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.591261 | orchestrator | 2026-02-18 06:07:26.591279 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:07:26.591296 | orchestrator | Wednesday 18 February 2026 06:06:50 +0000 (0:00:01.272) 0:15:39.057 **** 2026-02-18 06:07:26.591315 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.591334 | orchestrator | 2026-02-18 06:07:26.591354 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 06:07:26.591373 | orchestrator | Wednesday 18 February 2026 06:06:51 +0000 (0:00:01.160) 0:15:40.217 **** 2026-02-18 06:07:26.591392 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:07:26.591404 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-02-18 06:07:26.591433 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-18 06:07:26.591444 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:07:26.591455 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:07:26.591466 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:07:26.591477 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:07:26.591487 | orchestrator | 2026-02-18 06:07:26.591498 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 06:07:26.591509 | orchestrator | Wednesday 18 February 2026 06:06:53 +0000 (0:00:01.819) 0:15:42.036 **** 2026-02-18 06:07:26.591519 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:07:26.591530 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:07:26.591543 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-18 06:07:26.591556 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:07:26.591568 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:07:26.591581 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:07:26.591593 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:07:26.591636 | orchestrator | 2026-02-18 06:07:26.591655 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-18 06:07:26.591673 | orchestrator | Wednesday 18 February 2026 06:06:55 +0000 (0:00:02.285) 
0:15:44.322 **** 2026-02-18 06:07:26.591692 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.591710 | orchestrator | 2026-02-18 06:07:26.591731 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-18 06:07:26.591751 | orchestrator | Wednesday 18 February 2026 06:06:56 +0000 (0:00:00.888) 0:15:45.211 **** 2026-02-18 06:07:26.591768 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.591781 | orchestrator | 2026-02-18 06:07:26.591794 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-18 06:07:26.591807 | orchestrator | Wednesday 18 February 2026 06:06:57 +0000 (0:00:00.904) 0:15:46.116 **** 2026-02-18 06:07:26.591863 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.591877 | orchestrator | 2026-02-18 06:07:26.591890 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-18 06:07:26.591902 | orchestrator | Wednesday 18 February 2026 06:06:58 +0000 (0:00:00.777) 0:15:46.894 **** 2026-02-18 06:07:26.591913 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.591923 | orchestrator | 2026-02-18 06:07:26.591934 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-18 06:07:26.591945 | orchestrator | Wednesday 18 February 2026 06:06:58 +0000 (0:00:00.891) 0:15:47.786 **** 2026-02-18 06:07:26.591956 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.591967 | orchestrator | 2026-02-18 06:07:26.591979 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-18 06:07:26.591998 | orchestrator | Wednesday 18 February 2026 06:06:59 +0000 (0:00:00.802) 0:15:48.588 **** 2026-02-18 06:07:26.592017 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-18 06:07:26.592035 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  
2026-02-18 06:07:26.592053 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-18 06:07:26.592071 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.592091 | orchestrator | 2026-02-18 06:07:26.592110 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-18 06:07:26.592128 | orchestrator | Wednesday 18 February 2026 06:07:00 +0000 (0:00:01.137) 0:15:49.726 **** 2026-02-18 06:07:26.592140 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-18 06:07:26.592150 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-18 06:07:26.592179 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-18 06:07:26.592191 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-18 06:07:26.592202 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-18 06:07:26.592212 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-18 06:07:26.592223 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.592234 | orchestrator | 2026-02-18 06:07:26.592245 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-18 06:07:26.592256 | orchestrator | Wednesday 18 February 2026 06:07:02 +0000 (0:00:01.650) 0:15:51.377 **** 2026-02-18 06:07:26.592267 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-02-18 06:07:26.592278 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-18 06:07:26.592289 | orchestrator | 2026-02-18 06:07:26.592300 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-18 06:07:26.592311 | orchestrator | Wednesday 18 February 2026 06:07:05 +0000 (0:00:03.186) 
0:15:54.564 **** 2026-02-18 06:07:26.592322 | orchestrator | changed: [testbed-node-2] 2026-02-18 06:07:26.592333 | orchestrator | 2026-02-18 06:07:26.592344 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 06:07:26.592375 | orchestrator | Wednesday 18 February 2026 06:07:07 +0000 (0:00:02.095) 0:15:56.659 **** 2026-02-18 06:07:26.592394 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-18 06:07:26.592413 | orchestrator | 2026-02-18 06:07:26.592442 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 06:07:26.592461 | orchestrator | Wednesday 18 February 2026 06:07:09 +0000 (0:00:01.293) 0:15:57.953 **** 2026-02-18 06:07:26.592480 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-18 06:07:26.592500 | orchestrator | 2026-02-18 06:07:26.592518 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 06:07:26.592531 | orchestrator | Wednesday 18 February 2026 06:07:10 +0000 (0:00:01.164) 0:15:59.118 **** 2026-02-18 06:07:26.592542 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:07:26.592553 | orchestrator | 2026-02-18 06:07:26.592564 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 06:07:26.592574 | orchestrator | Wednesday 18 February 2026 06:07:11 +0000 (0:00:01.592) 0:16:00.711 **** 2026-02-18 06:07:26.592585 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.592596 | orchestrator | 2026-02-18 06:07:26.592607 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-18 06:07:26.592618 | orchestrator | Wednesday 18 February 2026 06:07:12 +0000 (0:00:01.152) 0:16:01.864 **** 2026-02-18 06:07:26.592628 | orchestrator | skipping: [testbed-node-2] 
2026-02-18 06:07:26.592639 | orchestrator | 2026-02-18 06:07:26.592650 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-18 06:07:26.592660 | orchestrator | Wednesday 18 February 2026 06:07:14 +0000 (0:00:01.185) 0:16:03.050 **** 2026-02-18 06:07:26.592671 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.592682 | orchestrator | 2026-02-18 06:07:26.592693 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-18 06:07:26.592703 | orchestrator | Wednesday 18 February 2026 06:07:15 +0000 (0:00:01.206) 0:16:04.256 **** 2026-02-18 06:07:26.592714 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:07:26.592725 | orchestrator | 2026-02-18 06:07:26.592743 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-18 06:07:26.592761 | orchestrator | Wednesday 18 February 2026 06:07:16 +0000 (0:00:01.589) 0:16:05.846 **** 2026-02-18 06:07:26.592778 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.592797 | orchestrator | 2026-02-18 06:07:26.592843 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-18 06:07:26.592861 | orchestrator | Wednesday 18 February 2026 06:07:18 +0000 (0:00:01.211) 0:16:07.058 **** 2026-02-18 06:07:26.592880 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.592892 | orchestrator | 2026-02-18 06:07:26.592903 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-18 06:07:26.592913 | orchestrator | Wednesday 18 February 2026 06:07:19 +0000 (0:00:01.150) 0:16:08.209 **** 2026-02-18 06:07:26.592924 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:07:26.592935 | orchestrator | 2026-02-18 06:07:26.592945 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-18 06:07:26.592956 | orchestrator | Wednesday 18 
February 2026 06:07:20 +0000 (0:00:01.552) 0:16:09.761 **** 2026-02-18 06:07:26.592966 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:07:26.592977 | orchestrator | 2026-02-18 06:07:26.592988 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-18 06:07:26.592998 | orchestrator | Wednesday 18 February 2026 06:07:22 +0000 (0:00:01.595) 0:16:11.357 **** 2026-02-18 06:07:26.593009 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.593019 | orchestrator | 2026-02-18 06:07:26.593030 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 06:07:26.593046 | orchestrator | Wednesday 18 February 2026 06:07:23 +0000 (0:00:00.810) 0:16:12.168 **** 2026-02-18 06:07:26.593064 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:07:26.593127 | orchestrator | 2026-02-18 06:07:26.593146 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 06:07:26.593165 | orchestrator | Wednesday 18 February 2026 06:07:24 +0000 (0:00:00.807) 0:16:12.976 **** 2026-02-18 06:07:26.593182 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.593200 | orchestrator | 2026-02-18 06:07:26.593217 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 06:07:26.593235 | orchestrator | Wednesday 18 February 2026 06:07:24 +0000 (0:00:00.795) 0:16:13.772 **** 2026-02-18 06:07:26.593254 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:07:26.593273 | orchestrator | 2026-02-18 06:07:26.593292 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 06:07:26.593311 | orchestrator | Wednesday 18 February 2026 06:07:25 +0000 (0:00:00.811) 0:16:14.583 **** 2026-02-18 06:07:26.593343 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.917369 | orchestrator | 2026-02-18 06:08:06.917488 | orchestrator | TASK 
[ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 06:08:06.917506 | orchestrator | Wednesday 18 February 2026 06:07:26 +0000 (0:00:00.875) 0:16:15.459 **** 2026-02-18 06:08:06.917518 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.917531 | orchestrator | 2026-02-18 06:08:06.917542 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 06:08:06.917553 | orchestrator | Wednesday 18 February 2026 06:07:27 +0000 (0:00:00.783) 0:16:16.242 **** 2026-02-18 06:08:06.917565 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.917576 | orchestrator | 2026-02-18 06:08:06.917587 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 06:08:06.917598 | orchestrator | Wednesday 18 February 2026 06:07:28 +0000 (0:00:00.772) 0:16:17.014 **** 2026-02-18 06:08:06.917609 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:08:06.917622 | orchestrator | 2026-02-18 06:08:06.917634 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 06:08:06.917645 | orchestrator | Wednesday 18 February 2026 06:07:28 +0000 (0:00:00.794) 0:16:17.809 **** 2026-02-18 06:08:06.917656 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:08:06.917667 | orchestrator | 2026-02-18 06:08:06.917678 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 06:08:06.917707 | orchestrator | Wednesday 18 February 2026 06:07:29 +0000 (0:00:00.809) 0:16:18.618 **** 2026-02-18 06:08:06.917718 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:08:06.917741 | orchestrator | 2026-02-18 06:08:06.917752 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-18 06:08:06.917780 | orchestrator | Wednesday 18 February 2026 06:07:30 +0000 (0:00:00.815) 0:16:19.434 **** 2026-02-18 06:08:06.917793 | 
orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.917804 | orchestrator | 2026-02-18 06:08:06.917815 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-18 06:08:06.917826 | orchestrator | Wednesday 18 February 2026 06:07:31 +0000 (0:00:00.770) 0:16:20.204 **** 2026-02-18 06:08:06.917837 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.917849 | orchestrator | 2026-02-18 06:08:06.917860 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-18 06:08:06.917893 | orchestrator | Wednesday 18 February 2026 06:07:32 +0000 (0:00:00.786) 0:16:20.990 **** 2026-02-18 06:08:06.917906 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.917919 | orchestrator | 2026-02-18 06:08:06.917933 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-18 06:08:06.917946 | orchestrator | Wednesday 18 February 2026 06:07:32 +0000 (0:00:00.790) 0:16:21.781 **** 2026-02-18 06:08:06.917959 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.917972 | orchestrator | 2026-02-18 06:08:06.917985 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-18 06:08:06.917998 | orchestrator | Wednesday 18 February 2026 06:07:33 +0000 (0:00:00.791) 0:16:22.572 **** 2026-02-18 06:08:06.918011 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918112 | orchestrator | 2026-02-18 06:08:06.918127 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-18 06:08:06.918149 | orchestrator | Wednesday 18 February 2026 06:07:34 +0000 (0:00:00.794) 0:16:23.366 **** 2026-02-18 06:08:06.918162 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918175 | orchestrator | 2026-02-18 06:08:06.918188 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 
2026-02-18 06:08:06.918201 | orchestrator | Wednesday 18 February 2026 06:07:35 +0000 (0:00:00.770) 0:16:24.138 **** 2026-02-18 06:08:06.918212 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918223 | orchestrator | 2026-02-18 06:08:06.918234 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-18 06:08:06.918245 | orchestrator | Wednesday 18 February 2026 06:07:36 +0000 (0:00:00.762) 0:16:24.900 **** 2026-02-18 06:08:06.918256 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918266 | orchestrator | 2026-02-18 06:08:06.918277 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-18 06:08:06.918288 | orchestrator | Wednesday 18 February 2026 06:07:36 +0000 (0:00:00.832) 0:16:25.733 **** 2026-02-18 06:08:06.918298 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918309 | orchestrator | 2026-02-18 06:08:06.918320 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-18 06:08:06.918331 | orchestrator | Wednesday 18 February 2026 06:07:37 +0000 (0:00:00.777) 0:16:26.510 **** 2026-02-18 06:08:06.918342 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918353 | orchestrator | 2026-02-18 06:08:06.918363 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-18 06:08:06.918374 | orchestrator | Wednesday 18 February 2026 06:07:38 +0000 (0:00:00.764) 0:16:27.274 **** 2026-02-18 06:08:06.918385 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918396 | orchestrator | 2026-02-18 06:08:06.918407 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-18 06:08:06.918417 | orchestrator | Wednesday 18 February 2026 06:07:39 +0000 (0:00:00.794) 0:16:28.069 **** 2026-02-18 06:08:06.918428 | orchestrator | skipping: [testbed-node-2] 2026-02-18 
06:08:06.918439 | orchestrator | 2026-02-18 06:08:06.918450 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-18 06:08:06.918460 | orchestrator | Wednesday 18 February 2026 06:07:40 +0000 (0:00:00.823) 0:16:28.893 **** 2026-02-18 06:08:06.918471 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:08:06.918482 | orchestrator | 2026-02-18 06:08:06.918493 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-18 06:08:06.918503 | orchestrator | Wednesday 18 February 2026 06:07:41 +0000 (0:00:01.571) 0:16:30.464 **** 2026-02-18 06:08:06.918514 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:08:06.918525 | orchestrator | 2026-02-18 06:08:06.918535 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-18 06:08:06.918546 | orchestrator | Wednesday 18 February 2026 06:07:43 +0000 (0:00:01.985) 0:16:32.449 **** 2026-02-18 06:08:06.918557 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-02-18 06:08:06.918569 | orchestrator | 2026-02-18 06:08:06.918599 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-18 06:08:06.918610 | orchestrator | Wednesday 18 February 2026 06:07:44 +0000 (0:00:01.326) 0:16:33.776 **** 2026-02-18 06:08:06.918621 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918632 | orchestrator | 2026-02-18 06:08:06.918643 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-18 06:08:06.918654 | orchestrator | Wednesday 18 February 2026 06:07:46 +0000 (0:00:01.132) 0:16:34.909 **** 2026-02-18 06:08:06.918665 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918676 | orchestrator | 2026-02-18 06:08:06.918687 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 
2026-02-18 06:08:06.918698 | orchestrator | Wednesday 18 February 2026 06:07:47 +0000 (0:00:01.113) 0:16:36.022 **** 2026-02-18 06:08:06.918716 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-18 06:08:06.918727 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-18 06:08:06.918738 | orchestrator | 2026-02-18 06:08:06.918749 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-18 06:08:06.918760 | orchestrator | Wednesday 18 February 2026 06:07:48 +0000 (0:00:01.833) 0:16:37.855 **** 2026-02-18 06:08:06.918771 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:08:06.918782 | orchestrator | 2026-02-18 06:08:06.918793 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-18 06:08:06.918804 | orchestrator | Wednesday 18 February 2026 06:07:50 +0000 (0:00:01.502) 0:16:39.358 **** 2026-02-18 06:08:06.918821 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918832 | orchestrator | 2026-02-18 06:08:06.918843 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-18 06:08:06.918854 | orchestrator | Wednesday 18 February 2026 06:07:51 +0000 (0:00:01.199) 0:16:40.557 **** 2026-02-18 06:08:06.918883 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918895 | orchestrator | 2026-02-18 06:08:06.918906 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-18 06:08:06.918917 | orchestrator | Wednesday 18 February 2026 06:07:52 +0000 (0:00:00.776) 0:16:41.334 **** 2026-02-18 06:08:06.918927 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.918938 | orchestrator | 2026-02-18 06:08:06.918949 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-18 06:08:06.918960 | orchestrator | 
Wednesday 18 February 2026 06:07:53 +0000 (0:00:00.783) 0:16:42.117 **** 2026-02-18 06:08:06.918971 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-02-18 06:08:06.918982 | orchestrator | 2026-02-18 06:08:06.918993 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-18 06:08:06.919004 | orchestrator | Wednesday 18 February 2026 06:07:54 +0000 (0:00:01.174) 0:16:43.291 **** 2026-02-18 06:08:06.919015 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:08:06.919026 | orchestrator | 2026-02-18 06:08:06.919037 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-18 06:08:06.919048 | orchestrator | Wednesday 18 February 2026 06:07:56 +0000 (0:00:01.754) 0:16:45.046 **** 2026-02-18 06:08:06.919059 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 06:08:06.919070 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 06:08:06.919081 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 06:08:06.919092 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.919103 | orchestrator | 2026-02-18 06:08:06.919114 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-18 06:08:06.919125 | orchestrator | Wednesday 18 February 2026 06:07:57 +0000 (0:00:01.126) 0:16:46.173 **** 2026-02-18 06:08:06.919136 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.919147 | orchestrator | 2026-02-18 06:08:06.919158 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-18 06:08:06.919169 | orchestrator | Wednesday 18 February 2026 06:07:58 +0000 (0:00:01.203) 0:16:47.376 **** 2026-02-18 06:08:06.919179 | orchestrator | skipping: [testbed-node-2] 2026-02-18 
06:08:06.919190 | orchestrator | 2026-02-18 06:08:06.919201 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-18 06:08:06.919212 | orchestrator | Wednesday 18 February 2026 06:07:59 +0000 (0:00:01.182) 0:16:48.559 **** 2026-02-18 06:08:06.919223 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.919234 | orchestrator | 2026-02-18 06:08:06.919245 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-18 06:08:06.919256 | orchestrator | Wednesday 18 February 2026 06:08:00 +0000 (0:00:01.187) 0:16:49.746 **** 2026-02-18 06:08:06.919273 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.919284 | orchestrator | 2026-02-18 06:08:06.919295 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-18 06:08:06.919306 | orchestrator | Wednesday 18 February 2026 06:08:02 +0000 (0:00:01.175) 0:16:50.922 **** 2026-02-18 06:08:06.919317 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:06.919328 | orchestrator | 2026-02-18 06:08:06.919339 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-18 06:08:06.919350 | orchestrator | Wednesday 18 February 2026 06:08:02 +0000 (0:00:00.799) 0:16:51.721 **** 2026-02-18 06:08:06.919361 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:08:06.919372 | orchestrator | 2026-02-18 06:08:06.919383 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-18 06:08:06.919394 | orchestrator | Wednesday 18 February 2026 06:08:04 +0000 (0:00:02.142) 0:16:53.864 **** 2026-02-18 06:08:06.919405 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:08:06.919416 | orchestrator | 2026-02-18 06:08:06.919427 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-18 06:08:06.919438 | orchestrator | Wednesday 18 February 
2026 06:08:05 +0000 (0:00:00.811) 0:16:54.675 **** 2026-02-18 06:08:06.919449 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-02-18 06:08:06.919460 | orchestrator | 2026-02-18 06:08:06.919478 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-18 06:08:43.711445 | orchestrator | Wednesday 18 February 2026 06:08:06 +0000 (0:00:01.106) 0:16:55.781 **** 2026-02-18 06:08:43.711582 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.711602 | orchestrator | 2026-02-18 06:08:43.711616 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-18 06:08:43.711627 | orchestrator | Wednesday 18 February 2026 06:08:08 +0000 (0:00:01.162) 0:16:56.944 **** 2026-02-18 06:08:43.711638 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.711650 | orchestrator | 2026-02-18 06:08:43.711661 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-18 06:08:43.711672 | orchestrator | Wednesday 18 February 2026 06:08:09 +0000 (0:00:01.158) 0:16:58.102 **** 2026-02-18 06:08:43.711683 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.711693 | orchestrator | 2026-02-18 06:08:43.711704 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-18 06:08:43.711715 | orchestrator | Wednesday 18 February 2026 06:08:10 +0000 (0:00:01.159) 0:16:59.262 **** 2026-02-18 06:08:43.711726 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.711737 | orchestrator | 2026-02-18 06:08:43.711748 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-18 06:08:43.711758 | orchestrator | Wednesday 18 February 2026 06:08:11 +0000 (0:00:01.157) 0:17:00.419 **** 2026-02-18 06:08:43.711769 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.711780 | 
orchestrator | 2026-02-18 06:08:43.711791 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-18 06:08:43.711818 | orchestrator | Wednesday 18 February 2026 06:08:12 +0000 (0:00:01.176) 0:17:01.596 **** 2026-02-18 06:08:43.711829 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.711841 | orchestrator | 2026-02-18 06:08:43.711852 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-18 06:08:43.711863 | orchestrator | Wednesday 18 February 2026 06:08:13 +0000 (0:00:01.156) 0:17:02.752 **** 2026-02-18 06:08:43.711874 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.711885 | orchestrator | 2026-02-18 06:08:43.711896 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-18 06:08:43.711906 | orchestrator | Wednesday 18 February 2026 06:08:15 +0000 (0:00:01.201) 0:17:03.954 **** 2026-02-18 06:08:43.711948 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.711960 | orchestrator | 2026-02-18 06:08:43.711971 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-18 06:08:43.711984 | orchestrator | Wednesday 18 February 2026 06:08:16 +0000 (0:00:01.184) 0:17:05.138 **** 2026-02-18 06:08:43.712021 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:08:43.712035 | orchestrator | 2026-02-18 06:08:43.712048 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-18 06:08:43.712062 | orchestrator | Wednesday 18 February 2026 06:08:17 +0000 (0:00:00.830) 0:17:05.969 **** 2026-02-18 06:08:43.712074 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-02-18 06:08:43.712087 | orchestrator | 2026-02-18 06:08:43.712099 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-18 
06:08:43.712111 | orchestrator | Wednesday 18 February 2026 06:08:18 +0000 (0:00:01.199) 0:17:07.168 **** 2026-02-18 06:08:43.712124 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-02-18 06:08:43.712137 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-18 06:08:43.712150 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-18 06:08:43.712161 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-18 06:08:43.712171 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-18 06:08:43.712182 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-18 06:08:43.712193 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-18 06:08:43.712203 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-18 06:08:43.712214 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 06:08:43.712225 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 06:08:43.712236 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 06:08:43.712247 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 06:08:43.712257 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 06:08:43.712268 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 06:08:43.712279 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-02-18 06:08:43.712290 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-02-18 06:08:43.712300 | orchestrator | 2026-02-18 06:08:43.712311 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-18 06:08:43.712322 | orchestrator | Wednesday 18 February 2026 06:08:24 +0000 (0:00:06.251) 0:17:13.419 **** 2026-02-18 06:08:43.712333 | orchestrator | skipping: 
[testbed-node-2] 2026-02-18 06:08:43.712344 | orchestrator | 2026-02-18 06:08:43.712354 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-18 06:08:43.712365 | orchestrator | Wednesday 18 February 2026 06:08:25 +0000 (0:00:00.800) 0:17:14.220 **** 2026-02-18 06:08:43.712376 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712387 | orchestrator | 2026-02-18 06:08:43.712398 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-18 06:08:43.712408 | orchestrator | Wednesday 18 February 2026 06:08:26 +0000 (0:00:00.773) 0:17:14.994 **** 2026-02-18 06:08:43.712419 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712430 | orchestrator | 2026-02-18 06:08:43.712441 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-18 06:08:43.712451 | orchestrator | Wednesday 18 February 2026 06:08:26 +0000 (0:00:00.766) 0:17:15.760 **** 2026-02-18 06:08:43.712462 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712473 | orchestrator | 2026-02-18 06:08:43.712484 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-18 06:08:43.712512 | orchestrator | Wednesday 18 February 2026 06:08:27 +0000 (0:00:00.774) 0:17:16.535 **** 2026-02-18 06:08:43.712524 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712535 | orchestrator | 2026-02-18 06:08:43.712546 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-18 06:08:43.712557 | orchestrator | Wednesday 18 February 2026 06:08:28 +0000 (0:00:00.791) 0:17:17.327 **** 2026-02-18 06:08:43.712575 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712586 | orchestrator | 2026-02-18 06:08:43.712597 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 
2026-02-18 06:08:43.712608 | orchestrator | Wednesday 18 February 2026 06:08:29 +0000 (0:00:00.815) 0:17:18.142 **** 2026-02-18 06:08:43.712619 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712629 | orchestrator | 2026-02-18 06:08:43.712640 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-18 06:08:43.712651 | orchestrator | Wednesday 18 February 2026 06:08:30 +0000 (0:00:00.777) 0:17:18.920 **** 2026-02-18 06:08:43.712662 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712673 | orchestrator | 2026-02-18 06:08:43.712684 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-18 06:08:43.712694 | orchestrator | Wednesday 18 February 2026 06:08:30 +0000 (0:00:00.813) 0:17:19.733 **** 2026-02-18 06:08:43.712705 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712716 | orchestrator | 2026-02-18 06:08:43.712732 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-18 06:08:43.712743 | orchestrator | Wednesday 18 February 2026 06:08:31 +0000 (0:00:00.799) 0:17:20.533 **** 2026-02-18 06:08:43.712754 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712765 | orchestrator | 2026-02-18 06:08:43.712776 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-18 06:08:43.712787 | orchestrator | Wednesday 18 February 2026 06:08:32 +0000 (0:00:00.794) 0:17:21.328 **** 2026-02-18 06:08:43.712797 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712808 | orchestrator | 2026-02-18 06:08:43.712819 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-18 06:08:43.712830 | orchestrator | Wednesday 18 February 2026 06:08:33 +0000 (0:00:00.774) 0:17:22.103 **** 2026-02-18 
06:08:43.712840 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712851 | orchestrator | 2026-02-18 06:08:43.712862 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-18 06:08:43.712873 | orchestrator | Wednesday 18 February 2026 06:08:33 +0000 (0:00:00.747) 0:17:22.850 **** 2026-02-18 06:08:43.712884 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.712895 | orchestrator | 2026-02-18 06:08:43.712906 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-18 06:08:43.712986 | orchestrator | Wednesday 18 February 2026 06:08:34 +0000 (0:00:00.972) 0:17:23.823 **** 2026-02-18 06:08:43.713005 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.713017 | orchestrator | 2026-02-18 06:08:43.713028 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-18 06:08:43.713039 | orchestrator | Wednesday 18 February 2026 06:08:35 +0000 (0:00:00.788) 0:17:24.612 **** 2026-02-18 06:08:43.713049 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.713060 | orchestrator | 2026-02-18 06:08:43.713071 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-18 06:08:43.713082 | orchestrator | Wednesday 18 February 2026 06:08:36 +0000 (0:00:00.884) 0:17:25.496 **** 2026-02-18 06:08:43.713093 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.713104 | orchestrator | 2026-02-18 06:08:43.713114 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-18 06:08:43.713125 | orchestrator | Wednesday 18 February 2026 06:08:37 +0000 (0:00:00.783) 0:17:26.280 **** 2026-02-18 06:08:43.713136 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.713147 | orchestrator | 2026-02-18 06:08:43.713158 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, 
radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:08:43.713170 | orchestrator | Wednesday 18 February 2026 06:08:38 +0000 (0:00:00.779) 0:17:27.059 **** 2026-02-18 06:08:43.713181 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.713192 | orchestrator | 2026-02-18 06:08:43.713202 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:08:43.713221 | orchestrator | Wednesday 18 February 2026 06:08:38 +0000 (0:00:00.805) 0:17:27.865 **** 2026-02-18 06:08:43.713232 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.713243 | orchestrator | 2026-02-18 06:08:43.713253 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:08:43.713264 | orchestrator | Wednesday 18 February 2026 06:08:39 +0000 (0:00:00.891) 0:17:28.757 **** 2026-02-18 06:08:43.713275 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.713286 | orchestrator | 2026-02-18 06:08:43.713297 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:08:43.713307 | orchestrator | Wednesday 18 February 2026 06:08:40 +0000 (0:00:00.820) 0:17:29.578 **** 2026-02-18 06:08:43.713318 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:08:43.713329 | orchestrator | 2026-02-18 06:08:43.713340 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:08:43.713351 | orchestrator | Wednesday 18 February 2026 06:08:41 +0000 (0:00:00.809) 0:17:30.387 **** 2026-02-18 06:08:43.713362 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-18 06:08:43.713373 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-18 06:08:43.713383 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-18 06:08:43.713394 | orchestrator | skipping: [testbed-node-2] 
2026-02-18 06:08:43.713405 | orchestrator | 2026-02-18 06:08:43.713416 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:08:43.713426 | orchestrator | Wednesday 18 February 2026 06:08:42 +0000 (0:00:01.048) 0:17:31.436 **** 2026-02-18 06:08:43.713437 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-18 06:08:43.713457 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-18 06:10:11.053922 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-18 06:10:11.054166 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:10:11.054191 | orchestrator | 2026-02-18 06:10:11.054207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:10:11.054230 | orchestrator | Wednesday 18 February 2026 06:08:43 +0000 (0:00:01.142) 0:17:32.578 **** 2026-02-18 06:10:11.054259 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-18 06:10:11.054281 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-18 06:10:11.054300 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-18 06:10:11.054319 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:10:11.054337 | orchestrator | 2026-02-18 06:10:11.054356 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:10:11.054375 | orchestrator | Wednesday 18 February 2026 06:08:44 +0000 (0:00:01.090) 0:17:33.669 **** 2026-02-18 06:10:11.054394 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:10:11.054412 | orchestrator | 2026-02-18 06:10:11.054430 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:10:11.054451 | orchestrator | Wednesday 18 February 2026 06:08:45 +0000 (0:00:00.815) 0:17:34.484 **** 2026-02-18 06:10:11.054472 | orchestrator | skipping: 
[testbed-node-2] => (item=0)  2026-02-18 06:10:11.054494 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:10:11.054514 | orchestrator | 2026-02-18 06:10:11.054552 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-18 06:10:11.054566 | orchestrator | Wednesday 18 February 2026 06:08:46 +0000 (0:00:01.042) 0:17:35.527 **** 2026-02-18 06:10:11.054579 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.054592 | orchestrator | 2026-02-18 06:10:11.054606 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-18 06:10:11.054619 | orchestrator | Wednesday 18 February 2026 06:08:48 +0000 (0:00:01.433) 0:17:36.960 **** 2026-02-18 06:10:11.054632 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.054645 | orchestrator | 2026-02-18 06:10:11.054658 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-18 06:10:11.054697 | orchestrator | Wednesday 18 February 2026 06:08:48 +0000 (0:00:00.791) 0:17:37.751 **** 2026-02-18 06:10:11.054710 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-02-18 06:10:11.054724 | orchestrator | 2026-02-18 06:10:11.054737 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-18 06:10:11.054749 | orchestrator | Wednesday 18 February 2026 06:08:50 +0000 (0:00:01.354) 0:17:39.106 **** 2026-02-18 06:10:11.054762 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.054775 | orchestrator | 2026-02-18 06:10:11.054788 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-18 06:10:11.054801 | orchestrator | Wednesday 18 February 2026 06:08:53 +0000 (0:00:03.166) 0:17:42.272 **** 2026-02-18 06:10:11.054816 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:10:11.054833 | orchestrator | 2026-02-18 06:10:11.054852 | 
orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-18 06:10:11.054869 | orchestrator | Wednesday 18 February 2026 06:08:54 +0000 (0:00:01.218) 0:17:43.490 **** 2026-02-18 06:10:11.054886 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.054905 | orchestrator | 2026-02-18 06:10:11.054917 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-18 06:10:11.054928 | orchestrator | Wednesday 18 February 2026 06:08:55 +0000 (0:00:01.157) 0:17:44.648 **** 2026-02-18 06:10:11.054939 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.054950 | orchestrator | 2026-02-18 06:10:11.054961 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-18 06:10:11.054971 | orchestrator | Wednesday 18 February 2026 06:08:56 +0000 (0:00:01.139) 0:17:45.788 **** 2026-02-18 06:10:11.054982 | orchestrator | changed: [testbed-node-2] 2026-02-18 06:10:11.054993 | orchestrator | 2026-02-18 06:10:11.055004 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-18 06:10:11.055014 | orchestrator | Wednesday 18 February 2026 06:08:58 +0000 (0:00:02.030) 0:17:47.819 **** 2026-02-18 06:10:11.055069 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.055083 | orchestrator | 2026-02-18 06:10:11.055094 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-18 06:10:11.055105 | orchestrator | Wednesday 18 February 2026 06:09:00 +0000 (0:00:01.550) 0:17:49.369 **** 2026-02-18 06:10:11.055116 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.055127 | orchestrator | 2026-02-18 06:10:11.055140 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-18 06:10:11.055159 | orchestrator | Wednesday 18 February 2026 06:09:02 +0000 (0:00:01.513) 0:17:50.883 **** 2026-02-18 06:10:11.055184 
| orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.055204 | orchestrator | 2026-02-18 06:10:11.055222 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-18 06:10:11.055240 | orchestrator | Wednesday 18 February 2026 06:09:03 +0000 (0:00:01.498) 0:17:52.382 **** 2026-02-18 06:10:11.055258 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:10:11.055274 | orchestrator | 2026-02-18 06:10:11.055291 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-18 06:10:11.055309 | orchestrator | Wednesday 18 February 2026 06:09:05 +0000 (0:00:01.585) 0:17:53.967 **** 2026-02-18 06:10:11.055326 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:10:11.055343 | orchestrator | 2026-02-18 06:10:11.055361 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-18 06:10:11.055377 | orchestrator | Wednesday 18 February 2026 06:09:06 +0000 (0:00:01.535) 0:17:55.503 **** 2026-02-18 06:10:11.055394 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 06:10:11.055411 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-18 06:10:11.055430 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-18 06:10:11.055448 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-18 06:10:11.055467 | orchestrator | 2026-02-18 06:10:11.055535 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-18 06:10:11.055555 | orchestrator | Wednesday 18 February 2026 06:09:10 +0000 (0:00:04.260) 0:17:59.764 **** 2026-02-18 06:10:11.055574 | orchestrator | changed: [testbed-node-2] 2026-02-18 06:10:11.055593 | orchestrator | 2026-02-18 06:10:11.055611 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-02-18 06:10:11.055630 | orchestrator | Wednesday 18 February 2026 06:09:12 +0000 (0:00:02.023) 0:18:01.787 **** 2026-02-18 06:10:11.055647 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.055665 | orchestrator | 2026-02-18 06:10:11.055682 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-18 06:10:11.055698 | orchestrator | Wednesday 18 February 2026 06:09:14 +0000 (0:00:01.203) 0:18:02.991 **** 2026-02-18 06:10:11.055715 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.055733 | orchestrator | 2026-02-18 06:10:11.055751 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-18 06:10:11.055769 | orchestrator | Wednesday 18 February 2026 06:09:15 +0000 (0:00:01.196) 0:18:04.187 **** 2026-02-18 06:10:11.055786 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.055803 | orchestrator | 2026-02-18 06:10:11.055821 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-18 06:10:11.055839 | orchestrator | Wednesday 18 February 2026 06:09:17 +0000 (0:00:01.741) 0:18:05.928 **** 2026-02-18 06:10:11.055857 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.055876 | orchestrator | 2026-02-18 06:10:11.055908 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-18 06:10:11.055927 | orchestrator | Wednesday 18 February 2026 06:09:18 +0000 (0:00:01.485) 0:18:07.414 **** 2026-02-18 06:10:11.055945 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:10:11.055964 | orchestrator | 2026-02-18 06:10:11.055980 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-18 06:10:11.055999 | orchestrator | Wednesday 18 February 2026 06:09:19 +0000 (0:00:00.788) 0:18:08.203 **** 2026-02-18 06:10:11.056019 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-02-18 06:10:11.056068 | orchestrator | 2026-02-18 06:10:11.056086 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-18 06:10:11.056105 | orchestrator | Wednesday 18 February 2026 06:09:20 +0000 (0:00:01.152) 0:18:09.355 **** 2026-02-18 06:10:11.056123 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:10:11.056142 | orchestrator | 2026-02-18 06:10:11.056160 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-18 06:10:11.056178 | orchestrator | Wednesday 18 February 2026 06:09:21 +0000 (0:00:01.158) 0:18:10.514 **** 2026-02-18 06:10:11.056198 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:10:11.056217 | orchestrator | 2026-02-18 06:10:11.056235 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-18 06:10:11.056252 | orchestrator | Wednesday 18 February 2026 06:09:22 +0000 (0:00:01.224) 0:18:11.739 **** 2026-02-18 06:10:11.056264 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-02-18 06:10:11.056274 | orchestrator | 2026-02-18 06:10:11.056285 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-18 06:10:11.056296 | orchestrator | Wednesday 18 February 2026 06:09:24 +0000 (0:00:01.145) 0:18:12.885 **** 2026-02-18 06:10:11.056307 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.056318 | orchestrator | 2026-02-18 06:10:11.056329 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-18 06:10:11.056339 | orchestrator | Wednesday 18 February 2026 06:09:26 +0000 (0:00:02.589) 0:18:15.474 **** 2026-02-18 06:10:11.056350 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.056361 | orchestrator | 2026-02-18 06:10:11.056372 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-02-18 06:10:11.056383 | orchestrator | Wednesday 18 February 2026 06:09:28 +0000 (0:00:01.954) 0:18:17.429 **** 2026-02-18 06:10:11.056407 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.056418 | orchestrator | 2026-02-18 06:10:11.056429 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-18 06:10:11.056440 | orchestrator | Wednesday 18 February 2026 06:09:31 +0000 (0:00:02.462) 0:18:19.892 **** 2026-02-18 06:10:11.056451 | orchestrator | changed: [testbed-node-2] 2026-02-18 06:10:11.056468 | orchestrator | 2026-02-18 06:10:11.056486 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-18 06:10:11.056504 | orchestrator | Wednesday 18 February 2026 06:09:33 +0000 (0:00:02.795) 0:18:22.687 **** 2026-02-18 06:10:11.056521 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-02-18 06:10:11.056540 | orchestrator | 2026-02-18 06:10:11.056557 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-18 06:10:11.056574 | orchestrator | Wednesday 18 February 2026 06:09:34 +0000 (0:00:01.146) 0:18:23.834 **** 2026-02-18 06:10:11.056585 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-18 06:10:11.056596 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.056607 | orchestrator | 2026-02-18 06:10:11.056618 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-18 06:10:11.056629 | orchestrator | Wednesday 18 February 2026 06:09:57 +0000 (0:00:23.021) 0:18:46.855 **** 2026-02-18 06:10:11.056640 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:11.056650 | orchestrator | 2026-02-18 06:10:11.056661 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-18 06:10:11.056672 | orchestrator | Wednesday 18 February 2026 06:10:00 +0000 (0:00:02.641) 0:18:49.497 **** 2026-02-18 06:10:11.056683 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:10:11.056694 | orchestrator | 2026-02-18 06:10:11.056704 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-18 06:10:11.056715 | orchestrator | Wednesday 18 February 2026 06:10:01 +0000 (0:00:00.837) 0:18:50.334 **** 2026-02-18 06:10:11.056743 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-18 06:10:46.978664 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-18 06:10:46.978787 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-18 06:10:46.978821 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-18 06:10:46.978836 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-18 06:10:46.978872 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0b8c8beb7758c62e0f4566ce31d3ab83800aacb1'}])  2026-02-18 06:10:46.978887 | orchestrator | 2026-02-18 06:10:46.978899 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-18 06:10:46.978912 | orchestrator | Wednesday 18 February 2026 06:10:11 +0000 (0:00:09.583) 0:18:59.918 **** 2026-02-18 06:10:46.978923 | orchestrator | changed: [testbed-node-2] 2026-02-18 06:10:46.978935 | orchestrator | 
2026-02-18 06:10:46.978946 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:10:46.978957 | orchestrator | Wednesday 18 February 2026 06:10:13 +0000 (0:00:01.999) 0:19:01.917 **** 2026-02-18 06:10:46.978968 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:10:46.978979 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-18 06:10:46.978990 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-18 06:10:46.979001 | orchestrator | 2026-02-18 06:10:46.979011 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:10:46.979022 | orchestrator | Wednesday 18 February 2026 06:10:15 +0000 (0:00:01.963) 0:19:03.881 **** 2026-02-18 06:10:46.979033 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-18 06:10:46.979044 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-18 06:10:46.979055 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-18 06:10:46.979099 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:10:46.979112 | orchestrator | 2026-02-18 06:10:46.979122 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-18 06:10:46.979133 | orchestrator | Wednesday 18 February 2026 06:10:16 +0000 (0:00:01.074) 0:19:04.955 **** 2026-02-18 06:10:46.979144 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:10:46.979155 | orchestrator | 2026-02-18 06:10:46.979165 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-18 06:10:46.979176 | orchestrator | Wednesday 18 February 2026 06:10:16 +0000 (0:00:00.767) 0:19:05.723 **** 2026-02-18 06:10:46.979187 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:46.979200 | orchestrator | 2026-02-18 06:10:46.979213 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-02-18 06:10:46.979225 | orchestrator | 2026-02-18 06:10:46.979238 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-02-18 06:10:46.979250 | orchestrator | Wednesday 18 February 2026 06:10:20 +0000 (0:00:03.191) 0:19:08.915 **** 2026-02-18 06:10:46.979262 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:10:46.979275 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:10:46.979287 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:10:46.979300 | orchestrator | 2026-02-18 06:10:46.979311 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-18 06:10:46.979322 | orchestrator | 2026-02-18 06:10:46.979332 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-18 06:10:46.979350 | orchestrator | Wednesday 18 February 2026 06:10:21 +0000 (0:00:01.495) 0:19:10.410 **** 2026-02-18 06:10:46.979369 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979388 | orchestrator | 2026-02-18 06:10:46.979407 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 06:10:46.979444 | orchestrator | Wednesday 18 February 2026 06:10:22 +0000 (0:00:01.138) 0:19:11.548 **** 2026-02-18 06:10:46.979456 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979467 | orchestrator | 2026-02-18 06:10:46.979478 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 06:10:46.979498 | orchestrator | Wednesday 18 February 2026 06:10:23 +0000 (0:00:01.126) 0:19:12.675 
**** 2026-02-18 06:10:46.979510 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979521 | orchestrator | 2026-02-18 06:10:46.979532 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 06:10:46.979543 | orchestrator | Wednesday 18 February 2026 06:10:25 +0000 (0:00:01.206) 0:19:13.882 **** 2026-02-18 06:10:46.979554 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979565 | orchestrator | 2026-02-18 06:10:46.979576 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 06:10:46.979587 | orchestrator | Wednesday 18 February 2026 06:10:26 +0000 (0:00:01.160) 0:19:15.042 **** 2026-02-18 06:10:46.979598 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979608 | orchestrator | 2026-02-18 06:10:46.979619 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 06:10:46.979636 | orchestrator | Wednesday 18 February 2026 06:10:27 +0000 (0:00:01.133) 0:19:16.176 **** 2026-02-18 06:10:46.979647 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979658 | orchestrator | 2026-02-18 06:10:46.979669 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 06:10:46.979680 | orchestrator | Wednesday 18 February 2026 06:10:28 +0000 (0:00:01.139) 0:19:17.316 **** 2026-02-18 06:10:46.979691 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979702 | orchestrator | 2026-02-18 06:10:46.979712 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 06:10:46.979723 | orchestrator | Wednesday 18 February 2026 06:10:29 +0000 (0:00:01.192) 0:19:18.508 **** 2026-02-18 06:10:46.979734 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979745 | orchestrator | 2026-02-18 06:10:46.979756 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] 
****************************** 2026-02-18 06:10:46.979767 | orchestrator | Wednesday 18 February 2026 06:10:30 +0000 (0:00:01.175) 0:19:19.684 **** 2026-02-18 06:10:46.979777 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979788 | orchestrator | 2026-02-18 06:10:46.979799 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 06:10:46.979810 | orchestrator | Wednesday 18 February 2026 06:10:31 +0000 (0:00:01.136) 0:19:20.820 **** 2026-02-18 06:10:46.979821 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979831 | orchestrator | 2026-02-18 06:10:46.979842 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 06:10:46.979853 | orchestrator | Wednesday 18 February 2026 06:10:33 +0000 (0:00:01.132) 0:19:21.953 **** 2026-02-18 06:10:46.979864 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979875 | orchestrator | 2026-02-18 06:10:46.979885 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 06:10:46.979896 | orchestrator | Wednesday 18 February 2026 06:10:34 +0000 (0:00:01.113) 0:19:23.066 **** 2026-02-18 06:10:46.979907 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979918 | orchestrator | 2026-02-18 06:10:46.979929 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-18 06:10:46.979940 | orchestrator | Wednesday 18 February 2026 06:10:35 +0000 (0:00:01.166) 0:19:24.233 **** 2026-02-18 06:10:46.979950 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.979961 | orchestrator | 2026-02-18 06:10:46.979972 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-18 06:10:46.979983 | orchestrator | Wednesday 18 February 2026 06:10:36 +0000 (0:00:01.115) 0:19:25.348 **** 2026-02-18 06:10:46.979993 | orchestrator | skipping: 
[testbed-node-0] 2026-02-18 06:10:46.980004 | orchestrator | 2026-02-18 06:10:46.980015 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-18 06:10:46.980026 | orchestrator | Wednesday 18 February 2026 06:10:37 +0000 (0:00:01.228) 0:19:26.577 **** 2026-02-18 06:10:46.980037 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.980047 | orchestrator | 2026-02-18 06:10:46.980058 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-18 06:10:46.980155 | orchestrator | Wednesday 18 February 2026 06:10:38 +0000 (0:00:01.119) 0:19:27.697 **** 2026-02-18 06:10:46.980168 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.980180 | orchestrator | 2026-02-18 06:10:46.980191 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-18 06:10:46.980202 | orchestrator | Wednesday 18 February 2026 06:10:39 +0000 (0:00:01.129) 0:19:28.826 **** 2026-02-18 06:10:46.980212 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.980223 | orchestrator | 2026-02-18 06:10:46.980234 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-18 06:10:46.980245 | orchestrator | Wednesday 18 February 2026 06:10:41 +0000 (0:00:01.275) 0:19:30.102 **** 2026-02-18 06:10:46.980256 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.980267 | orchestrator | 2026-02-18 06:10:46.980278 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-18 06:10:46.980289 | orchestrator | Wednesday 18 February 2026 06:10:42 +0000 (0:00:01.188) 0:19:31.291 **** 2026-02-18 06:10:46.980299 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.980310 | orchestrator | 2026-02-18 06:10:46.980321 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-18 
06:10:46.980332 | orchestrator | Wednesday 18 February 2026 06:10:43 +0000 (0:00:01.133) 0:19:32.424 **** 2026-02-18 06:10:46.980343 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.980354 | orchestrator | 2026-02-18 06:10:46.980365 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-18 06:10:46.980375 | orchestrator | Wednesday 18 February 2026 06:10:44 +0000 (0:00:01.119) 0:19:33.544 **** 2026-02-18 06:10:46.980392 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:10:46.980412 | orchestrator | 2026-02-18 06:10:46.980430 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-18 06:10:46.980450 | orchestrator | Wednesday 18 February 2026 06:10:45 +0000 (0:00:01.174) 0:19:34.719 **** 2026-02-18 06:10:46.980481 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.895618 | orchestrator | 2026-02-18 06:11:31.895739 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-18 06:11:31.895757 | orchestrator | Wednesday 18 February 2026 06:10:46 +0000 (0:00:01.124) 0:19:35.844 **** 2026-02-18 06:11:31.895769 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.895781 | orchestrator | 2026-02-18 06:11:31.895793 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-18 06:11:31.895804 | orchestrator | Wednesday 18 February 2026 06:10:48 +0000 (0:00:01.153) 0:19:36.998 **** 2026-02-18 06:11:31.895816 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.895827 | orchestrator | 2026-02-18 06:11:31.895838 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-18 06:11:31.895849 | orchestrator | Wednesday 18 February 2026 06:10:49 +0000 (0:00:01.159) 0:19:38.158 **** 2026-02-18 06:11:31.895860 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.895871 | 
orchestrator | 2026-02-18 06:11:31.895882 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-18 06:11:31.895910 | orchestrator | Wednesday 18 February 2026 06:10:50 +0000 (0:00:01.186) 0:19:39.345 **** 2026-02-18 06:11:31.895930 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.895949 | orchestrator | 2026-02-18 06:11:31.895965 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-18 06:11:31.895976 | orchestrator | Wednesday 18 February 2026 06:10:51 +0000 (0:00:01.176) 0:19:40.521 **** 2026-02-18 06:11:31.895987 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.895998 | orchestrator | 2026-02-18 06:11:31.896008 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-18 06:11:31.896019 | orchestrator | Wednesday 18 February 2026 06:10:52 +0000 (0:00:01.155) 0:19:41.677 **** 2026-02-18 06:11:31.896030 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896042 | orchestrator | 2026-02-18 06:11:31.896061 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-18 06:11:31.896108 | orchestrator | Wednesday 18 February 2026 06:10:53 +0000 (0:00:01.142) 0:19:42.820 **** 2026-02-18 06:11:31.896159 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896173 | orchestrator | 2026-02-18 06:11:31.896185 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-18 06:11:31.896198 | orchestrator | Wednesday 18 February 2026 06:10:55 +0000 (0:00:01.230) 0:19:44.050 **** 2026-02-18 06:11:31.896210 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896223 | orchestrator | 2026-02-18 06:11:31.896235 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-18 06:11:31.896248 | orchestrator | Wednesday 18 February 2026 
06:10:56 +0000 (0:00:01.153) 0:19:45.204 **** 2026-02-18 06:11:31.896260 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896272 | orchestrator | 2026-02-18 06:11:31.896290 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-18 06:11:31.896307 | orchestrator | Wednesday 18 February 2026 06:10:57 +0000 (0:00:01.123) 0:19:46.328 **** 2026-02-18 06:11:31.896319 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896332 | orchestrator | 2026-02-18 06:11:31.896344 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-18 06:11:31.896356 | orchestrator | Wednesday 18 February 2026 06:10:58 +0000 (0:00:01.221) 0:19:47.549 **** 2026-02-18 06:11:31.896369 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896382 | orchestrator | 2026-02-18 06:11:31.896394 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-18 06:11:31.896406 | orchestrator | Wednesday 18 February 2026 06:10:59 +0000 (0:00:01.165) 0:19:48.715 **** 2026-02-18 06:11:31.896419 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896431 | orchestrator | 2026-02-18 06:11:31.896443 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-18 06:11:31.896456 | orchestrator | Wednesday 18 February 2026 06:11:00 +0000 (0:00:01.142) 0:19:49.858 **** 2026-02-18 06:11:31.896468 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896481 | orchestrator | 2026-02-18 06:11:31.896493 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-18 06:11:31.896503 | orchestrator | Wednesday 18 February 2026 06:11:02 +0000 (0:00:01.140) 0:19:50.998 **** 2026-02-18 06:11:31.896514 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896525 | orchestrator | 2026-02-18 06:11:31.896536 | orchestrator | TASK 
[ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-18 06:11:31.896546 | orchestrator | Wednesday 18 February 2026 06:11:03 +0000 (0:00:01.140) 0:19:52.139 **** 2026-02-18 06:11:31.896557 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896568 | orchestrator | 2026-02-18 06:11:31.896579 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-18 06:11:31.896590 | orchestrator | Wednesday 18 February 2026 06:11:04 +0000 (0:00:01.143) 0:19:53.283 **** 2026-02-18 06:11:31.896601 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896611 | orchestrator | 2026-02-18 06:11:31.896622 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-18 06:11:31.896633 | orchestrator | Wednesday 18 February 2026 06:11:05 +0000 (0:00:01.157) 0:19:54.440 **** 2026-02-18 06:11:31.896643 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896654 | orchestrator | 2026-02-18 06:11:31.896665 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-18 06:11:31.896677 | orchestrator | Wednesday 18 February 2026 06:11:06 +0000 (0:00:01.166) 0:19:55.607 **** 2026-02-18 06:11:31.896688 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896698 | orchestrator | 2026-02-18 06:11:31.896709 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-18 06:11:31.896720 | orchestrator | Wednesday 18 February 2026 06:11:07 +0000 (0:00:01.194) 0:19:56.802 **** 2026-02-18 06:11:31.896730 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896753 | orchestrator | 2026-02-18 06:11:31.896764 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-18 06:11:31.896775 | orchestrator | Wednesday 18 
February 2026 06:11:09 +0000 (0:00:01.165) 0:19:57.967 **** 2026-02-18 06:11:31.896805 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896816 | orchestrator | 2026-02-18 06:11:31.896827 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-18 06:11:31.896838 | orchestrator | Wednesday 18 February 2026 06:11:10 +0000 (0:00:01.172) 0:19:59.140 **** 2026-02-18 06:11:31.896849 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896860 | orchestrator | 2026-02-18 06:11:31.896871 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-18 06:11:31.896881 | orchestrator | Wednesday 18 February 2026 06:11:11 +0000 (0:00:01.179) 0:20:00.319 **** 2026-02-18 06:11:31.896892 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896903 | orchestrator | 2026-02-18 06:11:31.896914 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-18 06:11:31.896925 | orchestrator | Wednesday 18 February 2026 06:11:12 +0000 (0:00:01.174) 0:20:01.494 **** 2026-02-18 06:11:31.896936 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896947 | orchestrator | 2026-02-18 06:11:31.896957 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-18 06:11:31.896975 | orchestrator | Wednesday 18 February 2026 06:11:13 +0000 (0:00:01.124) 0:20:02.618 **** 2026-02-18 06:11:31.896986 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.896997 | orchestrator | 2026-02-18 06:11:31.897008 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-18 06:11:31.897018 | orchestrator | Wednesday 18 February 2026 06:11:15 +0000 (0:00:01.285) 0:20:03.903 **** 2026-02-18 06:11:31.897029 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897040 | orchestrator | 2026-02-18 
06:11:31.897051 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-18 06:11:31.897062 | orchestrator | Wednesday 18 February 2026 06:11:16 +0000 (0:00:01.195) 0:20:05.098 **** 2026-02-18 06:11:31.897072 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897083 | orchestrator | 2026-02-18 06:11:31.897094 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-18 06:11:31.897105 | orchestrator | Wednesday 18 February 2026 06:11:17 +0000 (0:00:01.348) 0:20:06.447 **** 2026-02-18 06:11:31.897139 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897151 | orchestrator | 2026-02-18 06:11:31.897162 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-18 06:11:31.897173 | orchestrator | Wednesday 18 February 2026 06:11:18 +0000 (0:00:01.148) 0:20:07.595 **** 2026-02-18 06:11:31.897184 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897194 | orchestrator | 2026-02-18 06:11:31.897205 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:11:31.897218 | orchestrator | Wednesday 18 February 2026 06:11:19 +0000 (0:00:01.160) 0:20:08.756 **** 2026-02-18 06:11:31.897229 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897240 | orchestrator | 2026-02-18 06:11:31.897251 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:11:31.897262 | orchestrator | Wednesday 18 February 2026 06:11:21 +0000 (0:00:01.171) 0:20:09.927 **** 2026-02-18 06:11:31.897272 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897283 | orchestrator | 2026-02-18 06:11:31.897294 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:11:31.897304 | orchestrator | 
Wednesday 18 February 2026 06:11:22 +0000 (0:00:01.166) 0:20:11.094 **** 2026-02-18 06:11:31.897315 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897326 | orchestrator | 2026-02-18 06:11:31.897337 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:11:31.897348 | orchestrator | Wednesday 18 February 2026 06:11:23 +0000 (0:00:01.139) 0:20:12.233 **** 2026-02-18 06:11:31.897366 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897377 | orchestrator | 2026-02-18 06:11:31.897388 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:11:31.897398 | orchestrator | Wednesday 18 February 2026 06:11:24 +0000 (0:00:01.152) 0:20:13.386 **** 2026-02-18 06:11:31.897409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-18 06:11:31.897420 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-18 06:11:31.897431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-18 06:11:31.897441 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897452 | orchestrator | 2026-02-18 06:11:31.897463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:11:31.897474 | orchestrator | Wednesday 18 February 2026 06:11:26 +0000 (0:00:01.804) 0:20:15.190 **** 2026-02-18 06:11:31.897485 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-18 06:11:31.897495 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-18 06:11:31.897506 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-18 06:11:31.897517 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897528 | orchestrator | 2026-02-18 06:11:31.897539 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:11:31.897549 | 
orchestrator | Wednesday 18 February 2026 06:11:27 +0000 (0:00:01.465) 0:20:16.656 **** 2026-02-18 06:11:31.897560 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-18 06:11:31.897571 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-18 06:11:31.897582 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-18 06:11:31.897592 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897603 | orchestrator | 2026-02-18 06:11:31.897614 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:11:31.897625 | orchestrator | Wednesday 18 February 2026 06:11:29 +0000 (0:00:01.548) 0:20:18.204 **** 2026-02-18 06:11:31.897635 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:11:31.897646 | orchestrator | 2026-02-18 06:11:31.897657 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:11:31.897668 | orchestrator | Wednesday 18 February 2026 06:11:30 +0000 (0:00:01.194) 0:20:19.398 **** 2026-02-18 06:11:31.897679 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-18 06:11:31.897698 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:12:06.310332 | orchestrator | 2026-02-18 06:12:06.310445 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-18 06:12:06.310464 | orchestrator | Wednesday 18 February 2026 06:11:31 +0000 (0:00:01.361) 0:20:20.760 **** 2026-02-18 06:12:06.310476 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:12:06.310489 | orchestrator | 2026-02-18 06:12:06.310501 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-18 06:12:06.310512 | orchestrator | Wednesday 18 February 2026 06:11:33 +0000 (0:00:01.170) 0:20:21.930 **** 2026-02-18 06:12:06.310523 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-18 
06:12:06.310534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-18 06:12:06.310544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-18 06:12:06.310554 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:12:06.310564 | orchestrator | 2026-02-18 06:12:06.310575 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-18 06:12:06.310602 | orchestrator | Wednesday 18 February 2026 06:11:34 +0000 (0:00:01.413) 0:20:23.344 **** 2026-02-18 06:12:06.310614 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:12:06.310625 | orchestrator | 2026-02-18 06:12:06.310635 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-18 06:12:06.310645 | orchestrator | Wednesday 18 February 2026 06:11:35 +0000 (0:00:01.176) 0:20:24.521 **** 2026-02-18 06:12:06.310680 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:12:06.310692 | orchestrator | 2026-02-18 06:12:06.310701 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-18 06:12:06.310711 | orchestrator | Wednesday 18 February 2026 06:11:36 +0000 (0:00:01.132) 0:20:25.653 **** 2026-02-18 06:12:06.310721 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:12:06.310731 | orchestrator | 2026-02-18 06:12:06.310740 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-18 06:12:06.310749 | orchestrator | Wednesday 18 February 2026 06:11:37 +0000 (0:00:01.145) 0:20:26.799 **** 2026-02-18 06:12:06.310759 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:12:06.310769 | orchestrator | 2026-02-18 06:12:06.310778 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-18 06:12:06.310788 | orchestrator | 2026-02-18 06:12:06.310797 | orchestrator | TASK [Stop ceph mgr] 
*********************************************************** 2026-02-18 06:12:06.310807 | orchestrator | Wednesday 18 February 2026 06:11:39 +0000 (0:00:01.305) 0:20:28.104 **** 2026-02-18 06:12:06.310817 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.310826 | orchestrator | 2026-02-18 06:12:06.310836 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 06:12:06.310847 | orchestrator | Wednesday 18 February 2026 06:11:40 +0000 (0:00:00.774) 0:20:28.878 **** 2026-02-18 06:12:06.310858 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.310868 | orchestrator | 2026-02-18 06:12:06.310879 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 06:12:06.310888 | orchestrator | Wednesday 18 February 2026 06:11:40 +0000 (0:00:00.811) 0:20:29.690 **** 2026-02-18 06:12:06.310898 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.310907 | orchestrator | 2026-02-18 06:12:06.310917 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 06:12:06.310926 | orchestrator | Wednesday 18 February 2026 06:11:41 +0000 (0:00:00.807) 0:20:30.498 **** 2026-02-18 06:12:06.310936 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.310946 | orchestrator | 2026-02-18 06:12:06.310956 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 06:12:06.310965 | orchestrator | Wednesday 18 February 2026 06:11:42 +0000 (0:00:01.360) 0:20:31.858 **** 2026-02-18 06:12:06.310974 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.310983 | orchestrator | 2026-02-18 06:12:06.310992 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 06:12:06.311001 | orchestrator | Wednesday 18 February 2026 06:11:43 +0000 (0:00:00.763) 0:20:32.621 **** 2026-02-18 06:12:06.311010 | 
orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311019 | orchestrator | 2026-02-18 06:12:06.311028 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 06:12:06.311037 | orchestrator | Wednesday 18 February 2026 06:11:44 +0000 (0:00:00.827) 0:20:33.449 **** 2026-02-18 06:12:06.311047 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311056 | orchestrator | 2026-02-18 06:12:06.311065 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 06:12:06.311073 | orchestrator | Wednesday 18 February 2026 06:11:45 +0000 (0:00:00.797) 0:20:34.247 **** 2026-02-18 06:12:06.311082 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311092 | orchestrator | 2026-02-18 06:12:06.311101 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 06:12:06.311110 | orchestrator | Wednesday 18 February 2026 06:11:46 +0000 (0:00:00.781) 0:20:35.029 **** 2026-02-18 06:12:06.311120 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311129 | orchestrator | 2026-02-18 06:12:06.311138 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 06:12:06.311148 | orchestrator | Wednesday 18 February 2026 06:11:46 +0000 (0:00:00.783) 0:20:35.812 **** 2026-02-18 06:12:06.311192 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311202 | orchestrator | 2026-02-18 06:12:06.311212 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 06:12:06.311231 | orchestrator | Wednesday 18 February 2026 06:11:47 +0000 (0:00:00.805) 0:20:36.618 **** 2026-02-18 06:12:06.311241 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311250 | orchestrator | 2026-02-18 06:12:06.311260 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 
2026-02-18 06:12:06.311269 | orchestrator | Wednesday 18 February 2026 06:11:48 +0000 (0:00:00.788) 0:20:37.407 **** 2026-02-18 06:12:06.311278 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311287 | orchestrator | 2026-02-18 06:12:06.311297 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-18 06:12:06.311305 | orchestrator | Wednesday 18 February 2026 06:11:49 +0000 (0:00:00.827) 0:20:38.234 **** 2026-02-18 06:12:06.311314 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311323 | orchestrator | 2026-02-18 06:12:06.311354 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-18 06:12:06.311364 | orchestrator | Wednesday 18 February 2026 06:11:50 +0000 (0:00:00.803) 0:20:39.038 **** 2026-02-18 06:12:06.311373 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311381 | orchestrator | 2026-02-18 06:12:06.311389 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-18 06:12:06.311398 | orchestrator | Wednesday 18 February 2026 06:11:50 +0000 (0:00:00.798) 0:20:39.836 **** 2026-02-18 06:12:06.311406 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311415 | orchestrator | 2026-02-18 06:12:06.311424 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-18 06:12:06.311433 | orchestrator | Wednesday 18 February 2026 06:11:51 +0000 (0:00:00.797) 0:20:40.634 **** 2026-02-18 06:12:06.311442 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311450 | orchestrator | 2026-02-18 06:12:06.311455 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-18 06:12:06.311468 | orchestrator | Wednesday 18 February 2026 06:11:52 +0000 (0:00:00.797) 0:20:41.432 **** 2026-02-18 06:12:06.311474 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311479 
| orchestrator | 2026-02-18 06:12:06.311485 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-18 06:12:06.311490 | orchestrator | Wednesday 18 February 2026 06:11:53 +0000 (0:00:00.768) 0:20:42.200 **** 2026-02-18 06:12:06.311496 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311501 | orchestrator | 2026-02-18 06:12:06.311507 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-18 06:12:06.311512 | orchestrator | Wednesday 18 February 2026 06:11:54 +0000 (0:00:00.818) 0:20:43.019 **** 2026-02-18 06:12:06.311517 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311523 | orchestrator | 2026-02-18 06:12:06.311528 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-18 06:12:06.311536 | orchestrator | Wednesday 18 February 2026 06:11:54 +0000 (0:00:00.805) 0:20:43.825 **** 2026-02-18 06:12:06.311541 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311546 | orchestrator | 2026-02-18 06:12:06.311552 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-18 06:12:06.311557 | orchestrator | Wednesday 18 February 2026 06:11:55 +0000 (0:00:00.809) 0:20:44.635 **** 2026-02-18 06:12:06.311563 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311568 | orchestrator | 2026-02-18 06:12:06.311573 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-18 06:12:06.311579 | orchestrator | Wednesday 18 February 2026 06:11:56 +0000 (0:00:00.813) 0:20:45.448 **** 2026-02-18 06:12:06.311584 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311590 | orchestrator | 2026-02-18 06:12:06.311595 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-18 06:12:06.311600 | orchestrator | Wednesday 18 
February 2026 06:11:57 +0000 (0:00:00.830) 0:20:46.279 **** 2026-02-18 06:12:06.311606 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311611 | orchestrator | 2026-02-18 06:12:06.311623 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-18 06:12:06.311628 | orchestrator | Wednesday 18 February 2026 06:11:58 +0000 (0:00:00.791) 0:20:47.071 **** 2026-02-18 06:12:06.311634 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311643 | orchestrator | 2026-02-18 06:12:06.311652 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-18 06:12:06.311660 | orchestrator | Wednesday 18 February 2026 06:11:59 +0000 (0:00:00.898) 0:20:47.970 **** 2026-02-18 06:12:06.311669 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311678 | orchestrator | 2026-02-18 06:12:06.311686 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-18 06:12:06.311695 | orchestrator | Wednesday 18 February 2026 06:11:59 +0000 (0:00:00.769) 0:20:48.739 **** 2026-02-18 06:12:06.311704 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311713 | orchestrator | 2026-02-18 06:12:06.311722 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-18 06:12:06.311732 | orchestrator | Wednesday 18 February 2026 06:12:00 +0000 (0:00:00.786) 0:20:49.526 **** 2026-02-18 06:12:06.311739 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311745 | orchestrator | 2026-02-18 06:12:06.311750 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-18 06:12:06.311755 | orchestrator | Wednesday 18 February 2026 06:12:01 +0000 (0:00:00.802) 0:20:50.329 **** 2026-02-18 06:12:06.311761 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311766 | orchestrator | 2026-02-18 06:12:06.311772 | 
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-18 06:12:06.311777 | orchestrator | Wednesday 18 February 2026 06:12:02 +0000 (0:00:00.794) 0:20:51.123 **** 2026-02-18 06:12:06.311783 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311788 | orchestrator | 2026-02-18 06:12:06.311794 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-18 06:12:06.311799 | orchestrator | Wednesday 18 February 2026 06:12:03 +0000 (0:00:00.774) 0:20:51.898 **** 2026-02-18 06:12:06.311804 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311810 | orchestrator | 2026-02-18 06:12:06.311815 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-18 06:12:06.311821 | orchestrator | Wednesday 18 February 2026 06:12:03 +0000 (0:00:00.831) 0:20:52.729 **** 2026-02-18 06:12:06.311826 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311832 | orchestrator | 2026-02-18 06:12:06.311837 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-18 06:12:06.311843 | orchestrator | Wednesday 18 February 2026 06:12:04 +0000 (0:00:00.814) 0:20:53.544 **** 2026-02-18 06:12:06.311848 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311854 | orchestrator | 2026-02-18 06:12:06.311859 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-18 06:12:06.311864 | orchestrator | Wednesday 18 February 2026 06:12:05 +0000 (0:00:00.847) 0:20:54.391 **** 2026-02-18 06:12:06.311870 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:06.311875 | orchestrator | 2026-02-18 06:12:06.311888 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-18 06:12:37.721331 | orchestrator | Wednesday 18 February 2026 06:12:06 +0000 (0:00:00.783) 0:20:55.174 **** 
2026-02-18 06:12:37.721447 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.721464 | orchestrator | 2026-02-18 06:12:37.721477 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-18 06:12:37.721506 | orchestrator | Wednesday 18 February 2026 06:12:07 +0000 (0:00:00.809) 0:20:55.984 **** 2026-02-18 06:12:37.721517 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.721539 | orchestrator | 2026-02-18 06:12:37.721551 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-18 06:12:37.721562 | orchestrator | Wednesday 18 February 2026 06:12:07 +0000 (0:00:00.838) 0:20:56.823 **** 2026-02-18 06:12:37.721573 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.721609 | orchestrator | 2026-02-18 06:12:37.721620 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-18 06:12:37.721631 | orchestrator | Wednesday 18 February 2026 06:12:08 +0000 (0:00:00.786) 0:20:57.610 **** 2026-02-18 06:12:37.721657 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.721668 | orchestrator | 2026-02-18 06:12:37.721679 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-18 06:12:37.721689 | orchestrator | Wednesday 18 February 2026 06:12:09 +0000 (0:00:00.829) 0:20:58.440 **** 2026-02-18 06:12:37.721700 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.721711 | orchestrator | 2026-02-18 06:12:37.721722 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-18 06:12:37.721733 | orchestrator | Wednesday 18 February 2026 06:12:10 +0000 (0:00:00.828) 0:20:59.268 **** 2026-02-18 06:12:37.721744 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.721755 | orchestrator | 2026-02-18 06:12:37.721767 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch 
--report' to see how many osds are to be created] *** 2026-02-18 06:12:37.721779 | orchestrator | Wednesday 18 February 2026 06:12:11 +0000 (0:00:00.812) 0:21:00.081 **** 2026-02-18 06:12:37.721790 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.721802 | orchestrator | 2026-02-18 06:12:37.721814 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-18 06:12:37.721827 | orchestrator | Wednesday 18 February 2026 06:12:12 +0000 (0:00:00.814) 0:21:00.895 **** 2026-02-18 06:12:37.721840 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.721853 | orchestrator | 2026-02-18 06:12:37.721865 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-18 06:12:37.721877 | orchestrator | Wednesday 18 February 2026 06:12:12 +0000 (0:00:00.816) 0:21:01.712 **** 2026-02-18 06:12:37.721889 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.721902 | orchestrator | 2026-02-18 06:12:37.721914 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-18 06:12:37.721926 | orchestrator | Wednesday 18 February 2026 06:12:13 +0000 (0:00:00.800) 0:21:02.513 **** 2026-02-18 06:12:37.721938 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.721951 | orchestrator | 2026-02-18 06:12:37.721963 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-18 06:12:37.721976 | orchestrator | Wednesday 18 February 2026 06:12:14 +0000 (0:00:00.822) 0:21:03.336 **** 2026-02-18 06:12:37.721988 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722000 | orchestrator | 2026-02-18 06:12:37.722013 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-18 06:12:37.722104 | orchestrator | Wednesday 18 February 2026 06:12:15 +0000 
(0:00:00.793) 0:21:04.130 **** 2026-02-18 06:12:37.722117 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722130 | orchestrator | 2026-02-18 06:12:37.722142 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-18 06:12:37.722155 | orchestrator | Wednesday 18 February 2026 06:12:16 +0000 (0:00:00.825) 0:21:04.956 **** 2026-02-18 06:12:37.722166 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722177 | orchestrator | 2026-02-18 06:12:37.722221 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-18 06:12:37.722234 | orchestrator | Wednesday 18 February 2026 06:12:16 +0000 (0:00:00.884) 0:21:05.840 **** 2026-02-18 06:12:37.722245 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722256 | orchestrator | 2026-02-18 06:12:37.722267 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-18 06:12:37.722278 | orchestrator | Wednesday 18 February 2026 06:12:17 +0000 (0:00:00.831) 0:21:06.671 **** 2026-02-18 06:12:37.722289 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722299 | orchestrator | 2026-02-18 06:12:37.722310 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-18 06:12:37.722321 | orchestrator | Wednesday 18 February 2026 06:12:19 +0000 (0:00:01.349) 0:21:08.021 **** 2026-02-18 06:12:37.722342 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722354 | orchestrator | 2026-02-18 06:12:37.722364 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-18 06:12:37.722375 | orchestrator | Wednesday 18 February 2026 06:12:19 +0000 (0:00:00.780) 0:21:08.802 **** 2026-02-18 06:12:37.722386 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722397 | orchestrator | 2026-02-18 06:12:37.722408 | orchestrator | TASK [ceph-facts : Set 
current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:12:37.722420 | orchestrator | Wednesday 18 February 2026 06:12:20 +0000 (0:00:00.843) 0:21:09.645 **** 2026-02-18 06:12:37.722431 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722442 | orchestrator | 2026-02-18 06:12:37.722452 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:12:37.722463 | orchestrator | Wednesday 18 February 2026 06:12:21 +0000 (0:00:00.774) 0:21:10.420 **** 2026-02-18 06:12:37.722474 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722486 | orchestrator | 2026-02-18 06:12:37.722496 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:12:37.722507 | orchestrator | Wednesday 18 February 2026 06:12:22 +0000 (0:00:00.819) 0:21:11.239 **** 2026-02-18 06:12:37.722518 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722529 | orchestrator | 2026-02-18 06:12:37.722558 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:12:37.722570 | orchestrator | Wednesday 18 February 2026 06:12:23 +0000 (0:00:00.788) 0:21:12.028 **** 2026-02-18 06:12:37.722581 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722592 | orchestrator | 2026-02-18 06:12:37.722603 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:12:37.722614 | orchestrator | Wednesday 18 February 2026 06:12:23 +0000 (0:00:00.759) 0:21:12.787 **** 2026-02-18 06:12:37.722625 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-18 06:12:37.722636 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-18 06:12:37.722646 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-18 06:12:37.722657 | orchestrator | 
skipping: [testbed-node-1] 2026-02-18 06:12:37.722668 | orchestrator | 2026-02-18 06:12:37.722679 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:12:37.722696 | orchestrator | Wednesday 18 February 2026 06:12:25 +0000 (0:00:01.139) 0:21:13.927 **** 2026-02-18 06:12:37.722707 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-18 06:12:37.722718 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-18 06:12:37.722728 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-18 06:12:37.722739 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722750 | orchestrator | 2026-02-18 06:12:37.722760 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:12:37.722771 | orchestrator | Wednesday 18 February 2026 06:12:26 +0000 (0:00:01.057) 0:21:14.985 **** 2026-02-18 06:12:37.722782 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-18 06:12:37.722792 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-18 06:12:37.722803 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-18 06:12:37.722814 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722825 | orchestrator | 2026-02-18 06:12:37.722835 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:12:37.722846 | orchestrator | Wednesday 18 February 2026 06:12:27 +0000 (0:00:01.077) 0:21:16.062 **** 2026-02-18 06:12:37.722857 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722868 | orchestrator | 2026-02-18 06:12:37.722879 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:12:37.722890 | orchestrator | Wednesday 18 February 2026 06:12:27 +0000 (0:00:00.786) 0:21:16.848 **** 2026-02-18 06:12:37.722909 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-18 06:12:37.722920 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722931 | orchestrator | 2026-02-18 06:12:37.722942 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-18 06:12:37.722953 | orchestrator | Wednesday 18 February 2026 06:12:28 +0000 (0:00:00.913) 0:21:17.762 **** 2026-02-18 06:12:37.722964 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.722974 | orchestrator | 2026-02-18 06:12:37.722985 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-18 06:12:37.723000 | orchestrator | Wednesday 18 February 2026 06:12:29 +0000 (0:00:01.047) 0:21:18.809 **** 2026-02-18 06:12:37.723017 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-18 06:12:37.723034 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-18 06:12:37.723052 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-18 06:12:37.723070 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.723088 | orchestrator | 2026-02-18 06:12:37.723105 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-18 06:12:37.723116 | orchestrator | Wednesday 18 February 2026 06:12:31 +0000 (0:00:01.087) 0:21:19.897 **** 2026-02-18 06:12:37.723127 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.723137 | orchestrator | 2026-02-18 06:12:37.723148 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-18 06:12:37.723158 | orchestrator | Wednesday 18 February 2026 06:12:31 +0000 (0:00:00.777) 0:21:20.675 **** 2026-02-18 06:12:37.723169 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.723179 | orchestrator | 2026-02-18 06:12:37.723244 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] 
**************************************** 2026-02-18 06:12:37.723267 | orchestrator | Wednesday 18 February 2026 06:12:32 +0000 (0:00:00.837) 0:21:21.512 **** 2026-02-18 06:12:37.723285 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.723303 | orchestrator | 2026-02-18 06:12:37.723314 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-18 06:12:37.723325 | orchestrator | Wednesday 18 February 2026 06:12:33 +0000 (0:00:00.795) 0:21:22.308 **** 2026-02-18 06:12:37.723335 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:12:37.723346 | orchestrator | 2026-02-18 06:12:37.723357 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-18 06:12:37.723367 | orchestrator | 2026-02-18 06:12:37.723378 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-18 06:12:37.723389 | orchestrator | Wednesday 18 February 2026 06:12:34 +0000 (0:00:01.025) 0:21:23.333 **** 2026-02-18 06:12:37.723400 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:12:37.723410 | orchestrator | 2026-02-18 06:12:37.723421 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 06:12:37.723432 | orchestrator | Wednesday 18 February 2026 06:12:35 +0000 (0:00:00.775) 0:21:24.109 **** 2026-02-18 06:12:37.723442 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:12:37.723453 | orchestrator | 2026-02-18 06:12:37.723464 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 06:12:37.723474 | orchestrator | Wednesday 18 February 2026 06:12:36 +0000 (0:00:00.816) 0:21:24.926 **** 2026-02-18 06:12:37.723485 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:12:37.723496 | orchestrator | 2026-02-18 06:12:37.723507 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-02-18 06:12:37.723517 | orchestrator | Wednesday 18 February 2026 06:12:36 +0000 (0:00:00.800) 0:21:25.726 ****
2026-02-18 06:12:37.723538 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502371 | orchestrator |
2026-02-18 06:13:09.502477 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-18 06:13:09.502492 | orchestrator | Wednesday 18 February 2026 06:12:37 +0000 (0:00:00.861) 0:21:26.588 ****
2026-02-18 06:13:09.502503 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502515 | orchestrator |
2026-02-18 06:13:09.502545 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-18 06:13:09.502556 | orchestrator | Wednesday 18 February 2026 06:12:38 +0000 (0:00:00.787) 0:21:27.375 ****
2026-02-18 06:13:09.502566 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502575 | orchestrator |
2026-02-18 06:13:09.502585 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-18 06:13:09.502595 | orchestrator | Wednesday 18 February 2026 06:12:39 +0000 (0:00:00.791) 0:21:28.167 ****
2026-02-18 06:13:09.502605 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502614 | orchestrator |
2026-02-18 06:13:09.502624 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-18 06:13:09.502647 | orchestrator | Wednesday 18 February 2026 06:12:40 +0000 (0:00:00.761) 0:21:28.928 ****
2026-02-18 06:13:09.502657 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502667 | orchestrator |
2026-02-18 06:13:09.502676 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-18 06:13:09.502686 | orchestrator | Wednesday 18 February 2026 06:12:40 +0000 (0:00:00.864) 0:21:29.793 ****
2026-02-18 06:13:09.502696 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502705 | orchestrator |
2026-02-18 06:13:09.502715 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-18 06:13:09.502724 | orchestrator | Wednesday 18 February 2026 06:12:41 +0000 (0:00:00.802) 0:21:30.595 ****
2026-02-18 06:13:09.502750 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502770 | orchestrator |
2026-02-18 06:13:09.502780 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-18 06:13:09.502790 | orchestrator | Wednesday 18 February 2026 06:12:42 +0000 (0:00:00.814) 0:21:31.409 ****
2026-02-18 06:13:09.502799 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502809 | orchestrator |
2026-02-18 06:13:09.502818 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-18 06:13:09.502828 | orchestrator | Wednesday 18 February 2026 06:12:43 +0000 (0:00:00.793) 0:21:32.203 ****
2026-02-18 06:13:09.502837 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502847 | orchestrator |
2026-02-18 06:13:09.502857 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-18 06:13:09.502867 | orchestrator | Wednesday 18 February 2026 06:12:44 +0000 (0:00:00.817) 0:21:33.021 ****
2026-02-18 06:13:09.502876 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502888 | orchestrator |
2026-02-18 06:13:09.502899 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-18 06:13:09.502910 | orchestrator | Wednesday 18 February 2026 06:12:44 +0000 (0:00:00.791) 0:21:33.812 ****
2026-02-18 06:13:09.502921 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502932 | orchestrator |
2026-02-18 06:13:09.502944 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-18 06:13:09.502955 | orchestrator | Wednesday 18 February 2026 06:12:45 +0000 (0:00:00.786) 0:21:34.598 ****
2026-02-18 06:13:09.502967 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.502978 | orchestrator |
2026-02-18 06:13:09.502989 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-18 06:13:09.503000 | orchestrator | Wednesday 18 February 2026 06:12:46 +0000 (0:00:00.771) 0:21:35.370 ****
2026-02-18 06:13:09.503011 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503022 | orchestrator |
2026-02-18 06:13:09.503033 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-18 06:13:09.503043 | orchestrator | Wednesday 18 February 2026 06:12:47 +0000 (0:00:00.773) 0:21:36.144 ****
2026-02-18 06:13:09.503054 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503065 | orchestrator |
2026-02-18 06:13:09.503077 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-18 06:13:09.503088 | orchestrator | Wednesday 18 February 2026 06:12:48 +0000 (0:00:00.820) 0:21:36.966 ****
2026-02-18 06:13:09.503099 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503118 | orchestrator |
2026-02-18 06:13:09.503129 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-18 06:13:09.503140 | orchestrator | Wednesday 18 February 2026 06:12:48 +0000 (0:00:00.816) 0:21:37.782 ****
2026-02-18 06:13:09.503151 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503162 | orchestrator |
2026-02-18 06:13:09.503174 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-18 06:13:09.503186 | orchestrator | Wednesday 18 February 2026 06:12:49 +0000 (0:00:00.798) 0:21:38.581 ****
2026-02-18 06:13:09.503197 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503208 | orchestrator |
2026-02-18 06:13:09.503220 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-18 06:13:09.503246 | orchestrator | Wednesday 18 February 2026 06:12:50 +0000 (0:00:00.794) 0:21:39.376 ****
2026-02-18 06:13:09.503256 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503266 | orchestrator |
2026-02-18 06:13:09.503275 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-18 06:13:09.503285 | orchestrator | Wednesday 18 February 2026 06:12:51 +0000 (0:00:00.778) 0:21:40.155 ****
2026-02-18 06:13:09.503294 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503304 | orchestrator |
2026-02-18 06:13:09.503313 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-18 06:13:09.503323 | orchestrator | Wednesday 18 February 2026 06:12:52 +0000 (0:00:00.800) 0:21:40.955 ****
2026-02-18 06:13:09.503332 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503342 | orchestrator |
2026-02-18 06:13:09.503351 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-18 06:13:09.503361 | orchestrator | Wednesday 18 February 2026 06:12:52 +0000 (0:00:00.827) 0:21:41.783 ****
2026-02-18 06:13:09.503371 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503380 | orchestrator |
2026-02-18 06:13:09.503406 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-18 06:13:09.503417 | orchestrator | Wednesday 18 February 2026 06:12:53 +0000 (0:00:00.805) 0:21:42.588 ****
2026-02-18 06:13:09.503426 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503436 | orchestrator |
2026-02-18 06:13:09.503446 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-18 06:13:09.503456 | orchestrator | Wednesday 18 February 2026 06:12:54 +0000 (0:00:00.779) 0:21:43.368 ****
2026-02-18 06:13:09.503465 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503475 | orchestrator |
2026-02-18 06:13:09.503485 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-18 06:13:09.503495 | orchestrator | Wednesday 18 February 2026 06:12:55 +0000 (0:00:00.770) 0:21:44.140 ****
2026-02-18 06:13:09.503505 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503514 | orchestrator |
2026-02-18 06:13:09.503524 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-18 06:13:09.503539 | orchestrator | Wednesday 18 February 2026 06:12:56 +0000 (0:00:00.783) 0:21:44.923 ****
2026-02-18 06:13:09.503549 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503559 | orchestrator |
2026-02-18 06:13:09.503569 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-18 06:13:09.503579 | orchestrator | Wednesday 18 February 2026 06:12:56 +0000 (0:00:00.795) 0:21:45.718 ****
2026-02-18 06:13:09.503588 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503598 | orchestrator |
2026-02-18 06:13:09.503608 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-18 06:13:09.503617 | orchestrator | Wednesday 18 February 2026 06:12:57 +0000 (0:00:00.815) 0:21:46.534 ****
2026-02-18 06:13:09.503627 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503637 | orchestrator |
2026-02-18 06:13:09.503646 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-18 06:13:09.503656 | orchestrator | Wednesday 18 February 2026 06:12:58 +0000 (0:00:00.820) 0:21:47.354 ****
2026-02-18 06:13:09.503666 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503683 | orchestrator |
2026-02-18 06:13:09.503693 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-18 06:13:09.503702 | orchestrator | Wednesday 18 February 2026 06:12:59 +0000 (0:00:00.807) 0:21:48.162 ****
2026-02-18 06:13:09.503712 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503722 | orchestrator |
2026-02-18 06:13:09.503732 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-18 06:13:09.503741 | orchestrator | Wednesday 18 February 2026 06:13:00 +0000 (0:00:00.777) 0:21:48.939 ****
2026-02-18 06:13:09.503751 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503761 | orchestrator |
2026-02-18 06:13:09.503771 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-18 06:13:09.503781 | orchestrator | Wednesday 18 February 2026 06:13:00 +0000 (0:00:00.808) 0:21:49.748 ****
2026-02-18 06:13:09.503790 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503800 | orchestrator |
2026-02-18 06:13:09.503810 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-18 06:13:09.503820 | orchestrator | Wednesday 18 February 2026 06:13:01 +0000 (0:00:00.776) 0:21:50.524 ****
2026-02-18 06:13:09.503829 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503839 | orchestrator |
2026-02-18 06:13:09.503849 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-18 06:13:09.503859 | orchestrator | Wednesday 18 February 2026 06:13:02 +0000 (0:00:00.786) 0:21:51.311 ****
2026-02-18 06:13:09.503868 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503878 | orchestrator |
2026-02-18 06:13:09.503888 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-18 06:13:09.503897 | orchestrator | Wednesday 18 February 2026 06:13:03 +0000 (0:00:00.760) 0:21:52.072 ****
2026-02-18 06:13:09.503907 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503917 | orchestrator |
2026-02-18 06:13:09.503927 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-18 06:13:09.503936 | orchestrator | Wednesday 18 February 2026 06:13:03 +0000 (0:00:00.769) 0:21:52.841 ****
2026-02-18 06:13:09.503946 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503956 | orchestrator |
2026-02-18 06:13:09.503966 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-18 06:13:09.503975 | orchestrator | Wednesday 18 February 2026 06:13:04 +0000 (0:00:00.778) 0:21:53.620 ****
2026-02-18 06:13:09.503985 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.503995 | orchestrator |
2026-02-18 06:13:09.504005 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-18 06:13:09.504016 | orchestrator | Wednesday 18 February 2026 06:13:05 +0000 (0:00:00.787) 0:21:54.407 ****
2026-02-18 06:13:09.504026 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.504036 | orchestrator |
2026-02-18 06:13:09.504045 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-18 06:13:09.504055 | orchestrator | Wednesday 18 February 2026 06:13:06 +0000 (0:00:00.801) 0:21:55.209 ****
2026-02-18 06:13:09.504065 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.504074 | orchestrator |
2026-02-18 06:13:09.504084 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-18 06:13:09.504094 | orchestrator | Wednesday 18 February 2026 06:13:07 +0000 (0:00:00.785) 0:21:55.995 ****
2026-02-18 06:13:09.504104 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.504113 | orchestrator |
2026-02-18 06:13:09.504123 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-18 06:13:09.504133 | orchestrator | Wednesday 18 February 2026 06:13:07 +0000 (0:00:00.794) 0:21:56.790 ****
2026-02-18 06:13:09.504142 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.504152 | orchestrator |
2026-02-18 06:13:09.504162 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-18 06:13:09.504177 | orchestrator | Wednesday 18 February 2026 06:13:08 +0000 (0:00:00.792) 0:21:57.582 ****
2026-02-18 06:13:09.504187 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:13:09.504197 | orchestrator |
2026-02-18 06:13:09.504212 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-18 06:14:01.790190 | orchestrator | Wednesday 18 February 2026 06:13:09 +0000 (0:00:00.785) 0:21:58.367 ****
2026-02-18 06:14:01.790368 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.790392 | orchestrator |
2026-02-18 06:14:01.790411 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-18 06:14:01.790430 | orchestrator | Wednesday 18 February 2026 06:13:10 +0000 (0:00:00.791) 0:21:59.159 ****
2026-02-18 06:14:01.790442 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.790453 | orchestrator |
2026-02-18 06:14:01.790465 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-18 06:14:01.790477 | orchestrator | Wednesday 18 February 2026 06:13:11 +0000 (0:00:00.897) 0:22:00.057 ****
2026-02-18 06:14:01.790488 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.790498 | orchestrator |
2026-02-18 06:14:01.790509 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-18 06:14:01.790536 | orchestrator | Wednesday 18 February 2026 06:13:11 +0000 (0:00:00.802) 0:22:00.860 ****
2026-02-18 06:14:01.790547 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.790558 | orchestrator |
2026-02-18 06:14:01.790569 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-18 06:14:01.790580 | orchestrator | Wednesday 18 February 2026 06:13:12 +0000 (0:00:00.947) 0:22:01.807 ****
2026-02-18 06:14:01.790591 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.790602 | orchestrator |
2026-02-18 06:14:01.790612 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-18 06:14:01.790623 | orchestrator | Wednesday 18 February 2026 06:13:13 +0000 (0:00:00.828) 0:22:02.636 ****
2026-02-18 06:14:01.790634 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.790645 | orchestrator |
2026-02-18 06:14:01.790657 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-18 06:14:01.790670 | orchestrator | Wednesday 18 February 2026 06:13:14 +0000 (0:00:00.826) 0:22:03.462 ****
2026-02-18 06:14:01.790682 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.790695 | orchestrator |
2026-02-18 06:14:01.790708 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-18 06:14:01.790721 | orchestrator | Wednesday 18 February 2026 06:13:15 +0000 (0:00:00.803) 0:22:04.265 ****
2026-02-18 06:14:01.790734 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.790746 | orchestrator |
2026-02-18 06:14:01.790759 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-18 06:14:01.790772 | orchestrator | Wednesday 18 February 2026 06:13:16 +0000 (0:00:00.813) 0:22:05.079 ****
2026-02-18 06:14:01.790784 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.790797 | orchestrator |
2026-02-18 06:14:01.790809 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-18 06:14:01.790821 | orchestrator | Wednesday 18 February 2026 06:13:17 +0000 (0:00:00.798) 0:22:05.878 ****
2026-02-18 06:14:01.790834 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.790846 | orchestrator |
2026-02-18 06:14:01.790860 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-18 06:14:01.790872 | orchestrator | Wednesday 18 February 2026 06:13:17 +0000 (0:00:00.814) 0:22:06.693 ****
2026-02-18 06:14:01.790885 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-18 06:14:01.790898 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-18 06:14:01.790911 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-18 06:14:01.790936 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.790948 | orchestrator |
2026-02-18 06:14:01.790959 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-18 06:14:01.791001 | orchestrator | Wednesday 18 February 2026 06:13:19 +0000 (0:00:01.474) 0:22:08.168 ****
2026-02-18 06:14:01.791022 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-18 06:14:01.791042 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-18 06:14:01.791060 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-18 06:14:01.791072 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.791083 | orchestrator |
2026-02-18 06:14:01.791094 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-18 06:14:01.791105 | orchestrator | Wednesday 18 February 2026 06:13:20 +0000 (0:00:01.066) 0:22:09.234 ****
2026-02-18 06:14:01.791116 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-18 06:14:01.791127 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-18 06:14:01.791138 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-18 06:14:01.791148 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.791159 | orchestrator |
2026-02-18 06:14:01.791170 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-18 06:14:01.791181 | orchestrator | Wednesday 18 February 2026 06:13:21 +0000 (0:00:01.133) 0:22:10.368 ****
2026-02-18 06:14:01.791191 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.791202 | orchestrator |
2026-02-18 06:14:01.791213 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-18 06:14:01.791223 | orchestrator | Wednesday 18 February 2026 06:13:22 +0000 (0:00:00.825) 0:22:11.193 ****
2026-02-18 06:14:01.791235 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-18 06:14:01.791245 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.791256 | orchestrator |
2026-02-18 06:14:01.791267 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-18 06:14:01.791303 | orchestrator | Wednesday 18 February 2026 06:13:23 +0000 (0:00:00.912) 0:22:12.105 ****
2026-02-18 06:14:01.791323 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.791343 | orchestrator |
2026-02-18 06:14:01.791361 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-18 06:14:01.791379 | orchestrator | Wednesday 18 February 2026 06:13:24 +0000 (0:00:00.824) 0:22:12.930 ****
2026-02-18 06:14:01.791391 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-18 06:14:01.791422 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-18 06:14:01.791433 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-18 06:14:01.791444 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.791455 | orchestrator |
2026-02-18 06:14:01.791465 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-18 06:14:01.791476 | orchestrator | Wednesday 18 February 2026 06:13:25 +0000 (0:00:01.130) 0:22:14.061 ****
2026-02-18 06:14:01.791487 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.791498 | orchestrator |
2026-02-18 06:14:01.791508 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-18 06:14:01.791519 | orchestrator | Wednesday 18 February 2026 06:13:25 +0000 (0:00:00.790) 0:22:14.851 ****
2026-02-18 06:14:01.791530 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.791540 | orchestrator |
2026-02-18 06:14:01.791551 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-18 06:14:01.791569 | orchestrator | Wednesday 18 February 2026 06:13:26 +0000 (0:00:00.801) 0:22:15.653 ****
2026-02-18 06:14:01.791580 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.791591 | orchestrator |
2026-02-18 06:14:01.791602 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-18 06:14:01.791613 | orchestrator | Wednesday 18 February 2026 06:13:27 +0000 (0:00:00.787) 0:22:16.440 ****
2026-02-18 06:14:01.791624 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:14:01.791635 | orchestrator |
2026-02-18 06:14:01.791646 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-02-18 06:14:01.791666 | orchestrator |
2026-02-18 06:14:01.791677 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-18 06:14:01.791688 | orchestrator | Wednesday 18 February 2026 06:13:29 +0000 (0:00:01.876) 0:22:18.317 ****
2026-02-18 06:14:01.791699 | orchestrator | changed: [testbed-node-0]
2026-02-18 06:14:01.791710 | orchestrator |
2026-02-18 06:14:01.791721 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-02-18 06:14:01.791732 | orchestrator | Wednesday 18 February 2026 06:13:42 +0000 (0:00:12.841) 0:22:31.159 ****
2026-02-18 06:14:01.791742 | orchestrator | changed: [testbed-node-0]
2026-02-18 06:14:01.791753 | orchestrator |
2026-02-18 06:14:01.791764 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-18 06:14:01.791775 | orchestrator | Wednesday 18 February 2026 06:13:44 +0000 (0:00:02.537) 0:22:33.697 ****
2026-02-18 06:14:01.791786 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-18 06:14:01.791797 | orchestrator |
2026-02-18 06:14:01.791807 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-18 06:14:01.791818 | orchestrator | Wednesday 18 February 2026 06:13:45 +0000 (0:00:01.141) 0:22:34.838 ****
2026-02-18 06:14:01.791829 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:01.791840 | orchestrator |
2026-02-18 06:14:01.791851 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-18 06:14:01.791862 | orchestrator | Wednesday 18 February 2026 06:13:47 +0000 (0:00:01.492) 0:22:36.330 ****
2026-02-18 06:14:01.791873 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:01.791884 | orchestrator |
2026-02-18 06:14:01.791895 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-18 06:14:01.791905 | orchestrator | Wednesday 18 February 2026 06:13:48 +0000 (0:00:01.134) 0:22:37.465 ****
2026-02-18 06:14:01.791916 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:01.791927 | orchestrator |
2026-02-18 06:14:01.791938 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-18 06:14:01.791949 | orchestrator | Wednesday 18 February 2026 06:13:50 +0000 (0:00:01.482) 0:22:38.948 ****
2026-02-18 06:14:01.791960 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:01.791971 | orchestrator |
2026-02-18 06:14:01.791982 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-18 06:14:01.791993 | orchestrator | Wednesday 18 February 2026 06:13:51 +0000 (0:00:01.156) 0:22:40.104 ****
2026-02-18 06:14:01.792004 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:01.792014 | orchestrator |
2026-02-18 06:14:01.792025 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-18 06:14:01.792036 | orchestrator | Wednesday 18 February 2026 06:13:52 +0000 (0:00:01.176) 0:22:41.281 ****
2026-02-18 06:14:01.792047 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:01.792058 | orchestrator |
2026-02-18 06:14:01.792069 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-18 06:14:01.792081 | orchestrator | Wednesday 18 February 2026 06:13:53 +0000 (0:00:01.163) 0:22:42.445 ****
2026-02-18 06:14:01.792092 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:01.792103 | orchestrator |
2026-02-18 06:14:01.792114 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-18 06:14:01.792125 | orchestrator | Wednesday 18 February 2026 06:13:54 +0000 (0:00:01.166) 0:22:43.611 ****
2026-02-18 06:14:01.792135 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:01.792146 | orchestrator |
2026-02-18 06:14:01.792157 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-18 06:14:01.792168 | orchestrator | Wednesday 18 February 2026 06:13:55 +0000 (0:00:01.218) 0:22:44.829 ****
2026-02-18 06:14:01.792179 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 06:14:01.792190 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:14:01.792201 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:14:01.792218 | orchestrator |
2026-02-18 06:14:01.792229 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-18 06:14:01.792240 | orchestrator | Wednesday 18 February 2026 06:13:57 +0000 (0:00:01.722) 0:22:46.552 ****
2026-02-18 06:14:01.792251 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:01.792262 | orchestrator |
2026-02-18 06:14:01.792272 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-18 06:14:01.792310 | orchestrator | Wednesday 18 February 2026 06:13:58 +0000 (0:00:01.253) 0:22:47.806 ****
2026-02-18 06:14:01.792322 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 06:14:01.792341 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:14:24.893128 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:14:24.893239 | orchestrator |
2026-02-18 06:14:24.893256 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-18 06:14:24.893269 | orchestrator | Wednesday 18 February 2026 06:14:01 +0000 (0:00:02.849) 0:22:50.656 ****
2026-02-18 06:14:24.893281 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-18 06:14:24.893293 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-18 06:14:24.893351 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-18 06:14:24.893364 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.893376 | orchestrator |
2026-02-18 06:14:24.893387 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-18 06:14:24.893416 | orchestrator | Wednesday 18 February 2026 06:14:03 +0000 (0:00:01.476) 0:22:52.132 ****
2026-02-18 06:14:24.893429 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:14:24.893444 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:14:24.893455 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:14:24.893466 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.893477 | orchestrator |
2026-02-18 06:14:24.893488 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-18 06:14:24.893500 | orchestrator | Wednesday 18 February 2026 06:14:04 +0000 (0:00:01.678) 0:22:53.810 ****
2026-02-18 06:14:24.893513 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:14:24.893528 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:14:24.893539 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:14:24.893575 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.893587 | orchestrator |
2026-02-18 06:14:24.893598 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-18 06:14:24.893609 | orchestrator | Wednesday 18 February 2026 06:14:06 +0000 (0:00:01.203) 0:22:55.014 ****
2026-02-18 06:14:24.893623 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:13:59.471587', 'end': '2026-02-18 06:13:59.513288', 'delta': '0:00:00.041701', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:14:24.893655 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:14:00.024995', 'end': '2026-02-18 06:14:00.073243', 'delta': '0:00:00.048248', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:14:24.893673 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:14:00.604072', 'end': '2026-02-18 06:14:00.645652', 'delta': '0:00:00.041580', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:14:24.893687 | orchestrator |
2026-02-18 06:14:24.893699 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-18 06:14:24.893712 | orchestrator | Wednesday 18 February 2026 06:14:07 +0000 (0:00:01.286) 0:22:56.301 ****
2026-02-18 06:14:24.893724 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:24.893738 | orchestrator |
2026-02-18 06:14:24.893750 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-18 06:14:24.893762 | orchestrator | Wednesday 18 February 2026 06:14:08 +0000 (0:00:01.281) 0:22:57.551 ****
2026-02-18 06:14:24.893775 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.893787 | orchestrator |
2026-02-18 06:14:24.893800 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-18 06:14:24.893812 | orchestrator | Wednesday 18 February 2026 06:14:09 +0000 (0:00:01.281) 0:22:58.833 ****
2026-02-18 06:14:24.893824 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:24.893837 | orchestrator |
2026-02-18 06:14:24.893849 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-18 06:14:24.893861 | orchestrator | Wednesday 18 February 2026 06:14:11 +0000 (0:00:01.213) 0:23:00.047 ****
2026-02-18 06:14:24.893874 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:24.893886 | orchestrator |
2026-02-18 06:14:24.893898 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:14:24.893919 | orchestrator | Wednesday 18 February 2026 06:14:13 +0000 (0:00:02.001) 0:23:02.048 ****
2026-02-18 06:14:24.893932 | orchestrator | ok: [testbed-node-0]
2026-02-18 06:14:24.893944 | orchestrator |
2026-02-18 06:14:24.893957 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-18 06:14:24.893970 | orchestrator | Wednesday 18 February 2026 06:14:14 +0000 (0:00:01.158) 0:23:03.207 ****
2026-02-18 06:14:24.893982 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.893994 | orchestrator |
2026-02-18 06:14:24.894007 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-18 06:14:24.894081 | orchestrator | Wednesday 18 February 2026 06:14:15 +0000 (0:00:01.188) 0:23:04.396 ****
2026-02-18 06:14:24.894093 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.894104 | orchestrator |
2026-02-18 06:14:24.894115 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:14:24.894126 | orchestrator | Wednesday 18 February 2026 06:14:16 +0000 (0:00:01.207) 0:23:05.604 ****
2026-02-18 06:14:24.894136 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.894147 | orchestrator |
2026-02-18 06:14:24.894158 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-18 06:14:24.894169 | orchestrator | Wednesday 18 February 2026 06:14:17 +0000 (0:00:01.143) 0:23:06.748 ****
2026-02-18 06:14:24.894179 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.894190 | orchestrator |
2026-02-18 06:14:24.894201 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-18 06:14:24.894212 | orchestrator | Wednesday 18 February 2026 06:14:19 +0000 (0:00:01.183) 0:23:07.931 ****
2026-02-18 06:14:24.894223 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.894233 | orchestrator |
2026-02-18 06:14:24.894244 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-18 06:14:24.894255 | orchestrator | Wednesday 18 February 2026 06:14:20 +0000 (0:00:01.160) 0:23:09.092 ****
2026-02-18 06:14:24.894266 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.894277 | orchestrator |
2026-02-18 06:14:24.894288 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-18 06:14:24.894298 | orchestrator | Wednesday 18 February 2026 06:14:21 +0000 (0:00:01.176) 0:23:10.268 ****
2026-02-18 06:14:24.894332 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.894343 | orchestrator |
2026-02-18 06:14:24.894354 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-18 06:14:24.894364 | orchestrator | Wednesday 18 February 2026 06:14:22 +0000 (0:00:01.126) 0:23:11.395 ****
2026-02-18 06:14:24.894375 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.894386 | orchestrator |
2026-02-18 06:14:24.894397 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-18 06:14:24.894407 | orchestrator | Wednesday 18 February 2026 06:14:23 +0000 (0:00:01.193) 0:23:12.589 ****
2026-02-18 06:14:24.894418 | orchestrator | skipping: [testbed-node-0]
2026-02-18 06:14:24.894429 | orchestrator |
2026-02-18 06:14:24.894448 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-18 06:14:27.405435 | orchestrator | Wednesday 18 February 2026 06:14:24 +0000 (0:00:01.169) 0:23:13.758 ****
2026-02-18 06:14:27.405542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:14:27.405578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:14:27.405616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512',
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:14:27.405630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 06:14:27.405644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:14:27.405656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:14:27.405667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-18 06:14:27.405710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ab2d03ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 06:14:27.405734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:14:27.405745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:14:27.405756 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:14:27.405769 | orchestrator | 2026-02-18 06:14:27.405782 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 06:14:27.405793 | orchestrator | Wednesday 18 February 2026 06:14:26 +0000 (0:00:01.276) 0:23:15.035 **** 2026-02-18 06:14:27.405806 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:14:27.405819 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:14:27.405838 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:14:38.193305 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:14:38.193497 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:14:38.193515 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:14:38.193526 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:14:38.193563 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ab2d03ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:14:38.193585 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:14:38.193596 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:14:38.193607 | orchestrator | skipping: [testbed-node-0] 2026-02-18 
06:14:38.193619 | orchestrator | 2026-02-18 06:14:38.193631 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:14:38.193642 | orchestrator | Wednesday 18 February 2026 06:14:27 +0000 (0:00:01.240) 0:23:16.276 **** 2026-02-18 06:14:38.193652 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:14:38.193662 | orchestrator | 2026-02-18 06:14:38.193672 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:14:38.193682 | orchestrator | Wednesday 18 February 2026 06:14:28 +0000 (0:00:01.512) 0:23:17.789 **** 2026-02-18 06:14:38.193692 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:14:38.193701 | orchestrator | 2026-02-18 06:14:38.193711 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:14:38.193721 | orchestrator | Wednesday 18 February 2026 06:14:30 +0000 (0:00:01.242) 0:23:19.032 **** 2026-02-18 06:14:38.193731 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:14:38.193741 | orchestrator | 2026-02-18 06:14:38.193750 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:14:38.193760 | orchestrator | Wednesday 18 February 2026 06:14:31 +0000 (0:00:01.466) 0:23:20.499 **** 2026-02-18 06:14:38.193770 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:14:38.193779 | orchestrator | 2026-02-18 06:14:38.193789 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:14:38.193799 | orchestrator | Wednesday 18 February 2026 06:14:32 +0000 (0:00:01.181) 0:23:21.681 **** 2026-02-18 06:14:38.193809 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:14:38.193818 | orchestrator | 2026-02-18 06:14:38.193828 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:14:38.193840 | orchestrator | Wednesday 18 February 2026 
06:14:34 +0000 (0:00:01.304) 0:23:22.986 **** 2026-02-18 06:14:38.193851 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:14:38.193863 | orchestrator | 2026-02-18 06:14:38.193874 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:14:38.193890 | orchestrator | Wednesday 18 February 2026 06:14:35 +0000 (0:00:01.169) 0:23:24.155 **** 2026-02-18 06:14:38.193902 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 06:14:38.193913 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-18 06:14:38.193924 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-18 06:14:38.193935 | orchestrator | 2026-02-18 06:14:38.193945 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:14:38.193956 | orchestrator | Wednesday 18 February 2026 06:14:36 +0000 (0:00:01.661) 0:23:25.817 **** 2026-02-18 06:14:38.193967 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-18 06:14:38.193978 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-18 06:14:38.193989 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-18 06:14:38.194000 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:14:38.194011 | orchestrator | 2026-02-18 06:14:38.194085 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:15:23.132805 | orchestrator | Wednesday 18 February 2026 06:14:38 +0000 (0:00:01.239) 0:23:27.057 **** 2026-02-18 06:15:23.132914 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.132930 | orchestrator | 2026-02-18 06:15:23.132941 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 06:15:23.132952 | orchestrator | Wednesday 18 February 2026 06:14:39 +0000 (0:00:01.138) 0:23:28.195 **** 2026-02-18 06:15:23.132963 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 06:15:23.132973 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:15:23.132983 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:15:23.133009 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:15:23.133019 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:15:23.133029 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:15:23.133039 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:15:23.133048 | orchestrator | 2026-02-18 06:15:23.133058 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 06:15:23.133067 | orchestrator | Wednesday 18 February 2026 06:14:41 +0000 (0:00:02.264) 0:23:30.459 **** 2026-02-18 06:15:23.133077 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 06:15:23.133087 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:15:23.133096 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:15:23.133106 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:15:23.133115 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:15:23.133125 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:15:23.133135 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:15:23.133144 | orchestrator | 2026-02-18 06:15:23.133154 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 06:15:23.133164 | orchestrator | Wednesday 18 February 2026 06:14:44 +0000 (0:00:02.730) 0:23:33.189 **** 2026-02-18 06:15:23.133174 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-18 06:15:23.133185 | orchestrator | 2026-02-18 06:15:23.133195 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 06:15:23.133204 | orchestrator | Wednesday 18 February 2026 06:14:45 +0000 (0:00:01.133) 0:23:34.324 **** 2026-02-18 06:15:23.133214 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-18 06:15:23.133243 | orchestrator | 2026-02-18 06:15:23.133253 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 06:15:23.133263 | orchestrator | Wednesday 18 February 2026 06:14:46 +0000 (0:00:01.146) 0:23:35.470 **** 2026-02-18 06:15:23.133273 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:15:23.133282 | orchestrator | 2026-02-18 06:15:23.133292 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 06:15:23.133302 | orchestrator | Wednesday 18 February 2026 06:14:48 +0000 (0:00:01.548) 0:23:37.018 **** 2026-02-18 06:15:23.133312 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.133322 | orchestrator | 2026-02-18 06:15:23.133332 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-18 06:15:23.133341 | orchestrator | Wednesday 18 February 2026 06:14:49 +0000 (0:00:01.139) 0:23:38.158 **** 2026-02-18 06:15:23.133351 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.133380 | orchestrator | 2026-02-18 06:15:23.133391 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-02-18 06:15:23.133401 | orchestrator | Wednesday 18 February 2026 06:14:50 +0000 (0:00:01.161) 0:23:39.319 **** 2026-02-18 06:15:23.133410 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.133420 | orchestrator | 2026-02-18 06:15:23.133430 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-18 06:15:23.133439 | orchestrator | Wednesday 18 February 2026 06:14:51 +0000 (0:00:01.165) 0:23:40.485 **** 2026-02-18 06:15:23.133449 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:15:23.133458 | orchestrator | 2026-02-18 06:15:23.133468 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-18 06:15:23.133477 | orchestrator | Wednesday 18 February 2026 06:14:53 +0000 (0:00:01.545) 0:23:42.031 **** 2026-02-18 06:15:23.133487 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.133497 | orchestrator | 2026-02-18 06:15:23.133506 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-18 06:15:23.133516 | orchestrator | Wednesday 18 February 2026 06:14:54 +0000 (0:00:01.157) 0:23:43.188 **** 2026-02-18 06:15:23.133525 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.133535 | orchestrator | 2026-02-18 06:15:23.133544 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-18 06:15:23.133554 | orchestrator | Wednesday 18 February 2026 06:14:55 +0000 (0:00:01.128) 0:23:44.318 **** 2026-02-18 06:15:23.133563 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:15:23.133573 | orchestrator | 2026-02-18 06:15:23.133582 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-18 06:15:23.133592 | orchestrator | Wednesday 18 February 2026 06:14:57 +0000 (0:00:01.626) 0:23:45.944 **** 2026-02-18 06:15:23.133601 | orchestrator | ok: [testbed-node-0] 2026-02-18 
06:15:23.133611 | orchestrator | 2026-02-18 06:15:23.133620 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-18 06:15:23.133646 | orchestrator | Wednesday 18 February 2026 06:14:58 +0000 (0:00:01.516) 0:23:47.460 **** 2026-02-18 06:15:23.133657 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.133667 | orchestrator | 2026-02-18 06:15:23.133676 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 06:15:23.133686 | orchestrator | Wednesday 18 February 2026 06:14:59 +0000 (0:00:01.129) 0:23:48.590 **** 2026-02-18 06:15:23.133696 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:15:23.133705 | orchestrator | 2026-02-18 06:15:23.133715 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 06:15:23.133724 | orchestrator | Wednesday 18 February 2026 06:15:00 +0000 (0:00:01.224) 0:23:49.814 **** 2026-02-18 06:15:23.133734 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.133743 | orchestrator | 2026-02-18 06:15:23.133758 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 06:15:23.133768 | orchestrator | Wednesday 18 February 2026 06:15:02 +0000 (0:00:01.132) 0:23:50.947 **** 2026-02-18 06:15:23.133785 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.133795 | orchestrator | 2026-02-18 06:15:23.133804 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 06:15:23.133814 | orchestrator | Wednesday 18 February 2026 06:15:03 +0000 (0:00:01.137) 0:23:52.085 **** 2026-02-18 06:15:23.133824 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.133833 | orchestrator | 2026-02-18 06:15:23.133843 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 06:15:23.133852 | orchestrator | Wednesday 18 
February 2026 06:15:04 +0000 (0:00:01.217) 0:23:53.302 **** 2026-02-18 06:15:23.133862 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.133872 | orchestrator | 2026-02-18 06:15:23.133881 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 06:15:23.133891 | orchestrator | Wednesday 18 February 2026 06:15:05 +0000 (0:00:01.209) 0:23:54.512 **** 2026-02-18 06:15:23.133901 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.133910 | orchestrator | 2026-02-18 06:15:23.133920 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 06:15:23.133930 | orchestrator | Wednesday 18 February 2026 06:15:06 +0000 (0:00:01.148) 0:23:55.661 **** 2026-02-18 06:15:23.133939 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:15:23.133949 | orchestrator | 2026-02-18 06:15:23.133958 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 06:15:23.133968 | orchestrator | Wednesday 18 February 2026 06:15:07 +0000 (0:00:01.150) 0:23:56.811 **** 2026-02-18 06:15:23.133977 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:15:23.133987 | orchestrator | 2026-02-18 06:15:23.133997 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 06:15:23.134006 | orchestrator | Wednesday 18 February 2026 06:15:09 +0000 (0:00:01.156) 0:23:57.967 **** 2026-02-18 06:15:23.134074 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:15:23.134087 | orchestrator | 2026-02-18 06:15:23.134096 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-18 06:15:23.134106 | orchestrator | Wednesday 18 February 2026 06:15:10 +0000 (0:00:01.136) 0:23:59.104 **** 2026-02-18 06:15:23.134116 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.134125 | orchestrator | 2026-02-18 06:15:23.134135 | orchestrator | TASK 
[ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-18 06:15:23.134144 | orchestrator | Wednesday 18 February 2026 06:15:11 +0000 (0:00:01.199) 0:24:00.304 **** 2026-02-18 06:15:23.134154 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.134164 | orchestrator | 2026-02-18 06:15:23.134173 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-18 06:15:23.134183 | orchestrator | Wednesday 18 February 2026 06:15:12 +0000 (0:00:01.124) 0:24:01.428 **** 2026-02-18 06:15:23.134193 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.134202 | orchestrator | 2026-02-18 06:15:23.134212 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-18 06:15:23.134221 | orchestrator | Wednesday 18 February 2026 06:15:13 +0000 (0:00:01.207) 0:24:02.636 **** 2026-02-18 06:15:23.134231 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.134241 | orchestrator | 2026-02-18 06:15:23.134250 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-18 06:15:23.134260 | orchestrator | Wednesday 18 February 2026 06:15:14 +0000 (0:00:01.136) 0:24:03.773 **** 2026-02-18 06:15:23.134270 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.134279 | orchestrator | 2026-02-18 06:15:23.134289 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-18 06:15:23.134298 | orchestrator | Wednesday 18 February 2026 06:15:16 +0000 (0:00:01.146) 0:24:04.920 **** 2026-02-18 06:15:23.134308 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.134318 | orchestrator | 2026-02-18 06:15:23.134328 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-18 06:15:23.134337 | orchestrator | Wednesday 18 February 2026 06:15:17 +0000 (0:00:01.179) 0:24:06.099 **** 2026-02-18 
06:15:23.134355 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.134380 | orchestrator | 2026-02-18 06:15:23.134390 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-18 06:15:23.134400 | orchestrator | Wednesday 18 February 2026 06:15:18 +0000 (0:00:01.201) 0:24:07.300 **** 2026-02-18 06:15:23.134410 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.134419 | orchestrator | 2026-02-18 06:15:23.134429 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-18 06:15:23.134439 | orchestrator | Wednesday 18 February 2026 06:15:19 +0000 (0:00:01.195) 0:24:08.496 **** 2026-02-18 06:15:23.134448 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.134458 | orchestrator | 2026-02-18 06:15:23.134468 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-18 06:15:23.134477 | orchestrator | Wednesday 18 February 2026 06:15:20 +0000 (0:00:01.147) 0:24:09.644 **** 2026-02-18 06:15:23.134487 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.134497 | orchestrator | 2026-02-18 06:15:23.134506 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-18 06:15:23.134516 | orchestrator | Wednesday 18 February 2026 06:15:21 +0000 (0:00:01.193) 0:24:10.837 **** 2026-02-18 06:15:23.134526 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:15:23.134536 | orchestrator | 2026-02-18 06:15:23.134553 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-18 06:16:12.725529 | orchestrator | Wednesday 18 February 2026 06:15:23 +0000 (0:00:01.156) 0:24:11.994 **** 2026-02-18 06:16:12.725676 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.725705 | orchestrator | 2026-02-18 06:16:12.725727 | orchestrator | TASK [ceph-container-common : Generate systemd ceph 
target file] *************** 2026-02-18 06:16:12.725747 | orchestrator | Wednesday 18 February 2026 06:15:24 +0000 (0:00:01.151) 0:24:13.146 **** 2026-02-18 06:16:12.725766 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:16:12.725785 | orchestrator | 2026-02-18 06:16:12.725805 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-18 06:16:12.725844 | orchestrator | Wednesday 18 February 2026 06:15:26 +0000 (0:00:01.994) 0:24:15.141 **** 2026-02-18 06:16:12.725863 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:16:12.725882 | orchestrator | 2026-02-18 06:16:12.725900 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-18 06:16:12.725918 | orchestrator | Wednesday 18 February 2026 06:15:28 +0000 (0:00:02.461) 0:24:17.602 **** 2026-02-18 06:16:12.725938 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-02-18 06:16:12.725957 | orchestrator | 2026-02-18 06:16:12.725975 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-18 06:16:12.725994 | orchestrator | Wednesday 18 February 2026 06:15:29 +0000 (0:00:01.139) 0:24:18.741 **** 2026-02-18 06:16:12.726077 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.726103 | orchestrator | 2026-02-18 06:16:12.726123 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-18 06:16:12.726141 | orchestrator | Wednesday 18 February 2026 06:15:30 +0000 (0:00:01.126) 0:24:19.868 **** 2026-02-18 06:16:12.726159 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.726179 | orchestrator | 2026-02-18 06:16:12.726200 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-18 06:16:12.726220 | orchestrator | Wednesday 18 February 2026 06:15:32 +0000 (0:00:01.123) 0:24:20.991 **** 2026-02-18 
06:16:12.726240 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-18 06:16:12.726259 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-18 06:16:12.726278 | orchestrator | 2026-02-18 06:16:12.726297 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-18 06:16:12.726316 | orchestrator | Wednesday 18 February 2026 06:15:33 +0000 (0:00:01.863) 0:24:22.854 **** 2026-02-18 06:16:12.726330 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:16:12.726367 | orchestrator | 2026-02-18 06:16:12.726379 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-18 06:16:12.726390 | orchestrator | Wednesday 18 February 2026 06:15:35 +0000 (0:00:01.523) 0:24:24.378 **** 2026-02-18 06:16:12.726401 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.726477 | orchestrator | 2026-02-18 06:16:12.726491 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-18 06:16:12.726502 | orchestrator | Wednesday 18 February 2026 06:15:36 +0000 (0:00:01.163) 0:24:25.541 **** 2026-02-18 06:16:12.726513 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.726524 | orchestrator | 2026-02-18 06:16:12.726535 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-18 06:16:12.726546 | orchestrator | Wednesday 18 February 2026 06:15:37 +0000 (0:00:01.142) 0:24:26.683 **** 2026-02-18 06:16:12.726558 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.726569 | orchestrator | 2026-02-18 06:16:12.726579 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-18 06:16:12.726590 | orchestrator | Wednesday 18 February 2026 06:15:38 +0000 (0:00:01.133) 0:24:27.817 **** 2026-02-18 06:16:12.726601 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-02-18 06:16:12.726612 | orchestrator | 2026-02-18 06:16:12.726623 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-18 06:16:12.726634 | orchestrator | Wednesday 18 February 2026 06:15:40 +0000 (0:00:01.181) 0:24:28.999 **** 2026-02-18 06:16:12.726645 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:16:12.726656 | orchestrator | 2026-02-18 06:16:12.726667 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-18 06:16:12.726678 | orchestrator | Wednesday 18 February 2026 06:15:41 +0000 (0:00:01.791) 0:24:30.791 **** 2026-02-18 06:16:12.726689 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 06:16:12.726700 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 06:16:12.726711 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 06:16:12.726722 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.726733 | orchestrator | 2026-02-18 06:16:12.726743 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-18 06:16:12.726754 | orchestrator | Wednesday 18 February 2026 06:15:43 +0000 (0:00:01.193) 0:24:31.984 **** 2026-02-18 06:16:12.726765 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.726776 | orchestrator | 2026-02-18 06:16:12.726787 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-18 06:16:12.726798 | orchestrator | Wednesday 18 February 2026 06:15:44 +0000 (0:00:01.137) 0:24:33.122 **** 2026-02-18 06:16:12.726809 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.726820 | orchestrator | 2026-02-18 06:16:12.726831 | orchestrator | TASK [ceph-container-common : Copy ceph dev image 
file] ************************ 2026-02-18 06:16:12.726842 | orchestrator | Wednesday 18 February 2026 06:15:45 +0000 (0:00:01.238) 0:24:34.360 **** 2026-02-18 06:16:12.726853 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.726864 | orchestrator | 2026-02-18 06:16:12.726874 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-18 06:16:12.726885 | orchestrator | Wednesday 18 February 2026 06:15:46 +0000 (0:00:01.141) 0:24:35.501 **** 2026-02-18 06:16:12.726896 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.726907 | orchestrator | 2026-02-18 06:16:12.726940 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-18 06:16:12.726953 | orchestrator | Wednesday 18 February 2026 06:15:47 +0000 (0:00:01.192) 0:24:36.693 **** 2026-02-18 06:16:12.726963 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.726974 | orchestrator | 2026-02-18 06:16:12.726985 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-18 06:16:12.726997 | orchestrator | Wednesday 18 February 2026 06:15:48 +0000 (0:00:01.175) 0:24:37.869 **** 2026-02-18 06:16:12.727016 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:16:12.727027 | orchestrator | 2026-02-18 06:16:12.727038 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-18 06:16:12.727050 | orchestrator | Wednesday 18 February 2026 06:15:51 +0000 (0:00:02.668) 0:24:40.538 **** 2026-02-18 06:16:12.727061 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:16:12.727072 | orchestrator | 2026-02-18 06:16:12.727083 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-18 06:16:12.727095 | orchestrator | Wednesday 18 February 2026 06:15:52 +0000 (0:00:01.146) 0:24:41.684 **** 2026-02-18 06:16:12.727106 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-18 06:16:12.727116 | orchestrator | 2026-02-18 06:16:12.727128 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-18 06:16:12.727138 | orchestrator | Wednesday 18 February 2026 06:15:53 +0000 (0:00:01.153) 0:24:42.838 **** 2026-02-18 06:16:12.727149 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.727160 | orchestrator | 2026-02-18 06:16:12.727171 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-18 06:16:12.727182 | orchestrator | Wednesday 18 February 2026 06:15:55 +0000 (0:00:01.125) 0:24:43.964 **** 2026-02-18 06:16:12.727193 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.727204 | orchestrator | 2026-02-18 06:16:12.727215 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-18 06:16:12.727226 | orchestrator | Wednesday 18 February 2026 06:15:56 +0000 (0:00:01.175) 0:24:45.139 **** 2026-02-18 06:16:12.727237 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.727248 | orchestrator | 2026-02-18 06:16:12.727259 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-18 06:16:12.727269 | orchestrator | Wednesday 18 February 2026 06:15:57 +0000 (0:00:01.196) 0:24:46.335 **** 2026-02-18 06:16:12.727280 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.727291 | orchestrator | 2026-02-18 06:16:12.727390 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-18 06:16:12.727449 | orchestrator | Wednesday 18 February 2026 06:15:58 +0000 (0:00:01.184) 0:24:47.519 **** 2026-02-18 06:16:12.727462 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.727473 | orchestrator | 2026-02-18 06:16:12.727484 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
octopus] ******************* 2026-02-18 06:16:12.727495 | orchestrator | Wednesday 18 February 2026 06:15:59 +0000 (0:00:01.171) 0:24:48.691 **** 2026-02-18 06:16:12.727506 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.727517 | orchestrator | 2026-02-18 06:16:12.727528 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-18 06:16:12.727539 | orchestrator | Wednesday 18 February 2026 06:16:00 +0000 (0:00:01.165) 0:24:49.857 **** 2026-02-18 06:16:12.727549 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.727560 | orchestrator | 2026-02-18 06:16:12.727571 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-18 06:16:12.727582 | orchestrator | Wednesday 18 February 2026 06:16:02 +0000 (0:00:01.191) 0:24:51.049 **** 2026-02-18 06:16:12.727593 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:16:12.727604 | orchestrator | 2026-02-18 06:16:12.727614 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-18 06:16:12.727625 | orchestrator | Wednesday 18 February 2026 06:16:03 +0000 (0:00:01.153) 0:24:52.202 **** 2026-02-18 06:16:12.727636 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:16:12.727647 | orchestrator | 2026-02-18 06:16:12.727658 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-18 06:16:12.727669 | orchestrator | Wednesday 18 February 2026 06:16:04 +0000 (0:00:01.271) 0:24:53.473 **** 2026-02-18 06:16:12.727680 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-18 06:16:12.727691 | orchestrator | 2026-02-18 06:16:12.727701 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-18 06:16:12.727721 | orchestrator | Wednesday 18 February 2026 06:16:05 +0000 (0:00:01.200) 0:24:54.674 **** 2026-02-18 
06:16:12.727732 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-18 06:16:12.727743 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-18 06:16:12.727754 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-18 06:16:12.727765 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-18 06:16:12.727775 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-18 06:16:12.727786 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-18 06:16:12.727797 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-18 06:16:12.727808 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-18 06:16:12.727819 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 06:16:12.727829 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 06:16:12.727840 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 06:16:12.727851 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 06:16:12.727862 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 06:16:12.727873 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 06:16:12.727883 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-18 06:16:12.727895 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-02-18 06:16:12.727905 | orchestrator | 2026-02-18 06:16:12.727927 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-18 06:17:07.416434 | orchestrator | Wednesday 18 February 2026 06:16:12 +0000 (0:00:06.904) 0:25:01.578 **** 2026-02-18 06:17:07.416576 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.416594 | orchestrator | 2026-02-18 06:17:07.416607 | orchestrator | TASK [ceph-config : 
Reset num_osds] ******************************************** 2026-02-18 06:17:07.416618 | orchestrator | Wednesday 18 February 2026 06:16:13 +0000 (0:00:01.135) 0:25:02.714 **** 2026-02-18 06:17:07.416629 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.416640 | orchestrator | 2026-02-18 06:17:07.416652 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-18 06:17:07.416680 | orchestrator | Wednesday 18 February 2026 06:16:14 +0000 (0:00:01.156) 0:25:03.870 **** 2026-02-18 06:17:07.416692 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.416703 | orchestrator | 2026-02-18 06:17:07.416714 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-18 06:17:07.416725 | orchestrator | Wednesday 18 February 2026 06:16:16 +0000 (0:00:01.216) 0:25:05.087 **** 2026-02-18 06:17:07.416736 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.416747 | orchestrator | 2026-02-18 06:17:07.416758 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-18 06:17:07.416769 | orchestrator | Wednesday 18 February 2026 06:16:17 +0000 (0:00:01.117) 0:25:06.205 **** 2026-02-18 06:17:07.416781 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.416792 | orchestrator | 2026-02-18 06:17:07.416803 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-18 06:17:07.416814 | orchestrator | Wednesday 18 February 2026 06:16:18 +0000 (0:00:01.107) 0:25:07.312 **** 2026-02-18 06:17:07.416825 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.416836 | orchestrator | 2026-02-18 06:17:07.416847 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-18 06:17:07.416859 | orchestrator | Wednesday 18 February 2026 06:16:19 +0000 (0:00:01.151) 0:25:08.464 **** 2026-02-18 
06:17:07.416870 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.416880 | orchestrator | 2026-02-18 06:17:07.416891 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-18 06:17:07.416923 | orchestrator | Wednesday 18 February 2026 06:16:20 +0000 (0:00:01.173) 0:25:09.638 **** 2026-02-18 06:17:07.416935 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.416946 | orchestrator | 2026-02-18 06:17:07.416957 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-18 06:17:07.416968 | orchestrator | Wednesday 18 February 2026 06:16:21 +0000 (0:00:01.171) 0:25:10.810 **** 2026-02-18 06:17:07.416981 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.416994 | orchestrator | 2026-02-18 06:17:07.417006 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-18 06:17:07.417019 | orchestrator | Wednesday 18 February 2026 06:16:23 +0000 (0:00:01.124) 0:25:11.934 **** 2026-02-18 06:17:07.417032 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417044 | orchestrator | 2026-02-18 06:17:07.417057 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-18 06:17:07.417070 | orchestrator | Wednesday 18 February 2026 06:16:24 +0000 (0:00:01.152) 0:25:13.087 **** 2026-02-18 06:17:07.417082 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417095 | orchestrator | 2026-02-18 06:17:07.417109 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-18 06:17:07.417122 | orchestrator | Wednesday 18 February 2026 06:16:25 +0000 (0:00:01.126) 0:25:14.214 **** 2026-02-18 06:17:07.417134 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417144 | orchestrator | 2026-02-18 06:17:07.417155 | 
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-18 06:17:07.417166 | orchestrator | Wednesday 18 February 2026 06:16:26 +0000 (0:00:01.121) 0:25:15.335 **** 2026-02-18 06:17:07.417177 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417188 | orchestrator | 2026-02-18 06:17:07.417199 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-18 06:17:07.417209 | orchestrator | Wednesday 18 February 2026 06:16:27 +0000 (0:00:01.276) 0:25:16.612 **** 2026-02-18 06:17:07.417220 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417231 | orchestrator | 2026-02-18 06:17:07.417242 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-18 06:17:07.417252 | orchestrator | Wednesday 18 February 2026 06:16:28 +0000 (0:00:01.194) 0:25:17.807 **** 2026-02-18 06:17:07.417263 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417274 | orchestrator | 2026-02-18 06:17:07.417285 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-18 06:17:07.417295 | orchestrator | Wednesday 18 February 2026 06:16:30 +0000 (0:00:01.264) 0:25:19.071 **** 2026-02-18 06:17:07.417306 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417317 | orchestrator | 2026-02-18 06:17:07.417328 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-18 06:17:07.417338 | orchestrator | Wednesday 18 February 2026 06:16:31 +0000 (0:00:01.214) 0:25:20.286 **** 2026-02-18 06:17:07.417349 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417360 | orchestrator | 2026-02-18 06:17:07.417371 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:17:07.417383 | orchestrator | Wednesday 18 
February 2026 06:16:32 +0000 (0:00:01.134) 0:25:21.421 **** 2026-02-18 06:17:07.417394 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417405 | orchestrator | 2026-02-18 06:17:07.417416 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:17:07.417427 | orchestrator | Wednesday 18 February 2026 06:16:33 +0000 (0:00:01.180) 0:25:22.602 **** 2026-02-18 06:17:07.417438 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417449 | orchestrator | 2026-02-18 06:17:07.417459 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:17:07.417490 | orchestrator | Wednesday 18 February 2026 06:16:34 +0000 (0:00:01.139) 0:25:23.741 **** 2026-02-18 06:17:07.417501 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417512 | orchestrator | 2026-02-18 06:17:07.417547 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:17:07.417559 | orchestrator | Wednesday 18 February 2026 06:16:36 +0000 (0:00:01.155) 0:25:24.896 **** 2026-02-18 06:17:07.417570 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417581 | orchestrator | 2026-02-18 06:17:07.417592 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:17:07.417603 | orchestrator | Wednesday 18 February 2026 06:16:37 +0000 (0:00:01.152) 0:25:26.049 **** 2026-02-18 06:17:07.417614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-18 06:17:07.417630 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-18 06:17:07.417641 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-18 06:17:07.417652 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417663 | orchestrator | 2026-02-18 06:17:07.417674 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_interface - ipv4] ****** 2026-02-18 06:17:07.417685 | orchestrator | Wednesday 18 February 2026 06:16:39 +0000 (0:00:01.828) 0:25:27.877 **** 2026-02-18 06:17:07.417696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-18 06:17:07.417707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-18 06:17:07.417718 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-18 06:17:07.417729 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417740 | orchestrator | 2026-02-18 06:17:07.417751 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:17:07.417762 | orchestrator | Wednesday 18 February 2026 06:16:40 +0000 (0:00:01.845) 0:25:29.723 **** 2026-02-18 06:17:07.417773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-18 06:17:07.417784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-18 06:17:07.417795 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-18 06:17:07.417806 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417817 | orchestrator | 2026-02-18 06:17:07.417828 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:17:07.417839 | orchestrator | Wednesday 18 February 2026 06:16:42 +0000 (0:00:01.845) 0:25:31.569 **** 2026-02-18 06:17:07.417850 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417861 | orchestrator | 2026-02-18 06:17:07.417872 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:17:07.417883 | orchestrator | Wednesday 18 February 2026 06:16:43 +0000 (0:00:01.164) 0:25:32.733 **** 2026-02-18 06:17:07.417894 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-18 06:17:07.417905 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.417916 | orchestrator 
| 2026-02-18 06:17:07.417927 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-18 06:17:07.417938 | orchestrator | Wednesday 18 February 2026 06:16:45 +0000 (0:00:01.268) 0:25:34.002 **** 2026-02-18 06:17:07.417950 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:17:07.417961 | orchestrator | 2026-02-18 06:17:07.417972 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-18 06:17:07.417983 | orchestrator | Wednesday 18 February 2026 06:16:46 +0000 (0:00:01.833) 0:25:35.836 **** 2026-02-18 06:17:07.417994 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 06:17:07.418005 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:17:07.418066 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:17:07.418078 | orchestrator | 2026-02-18 06:17:07.418089 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-18 06:17:07.418100 | orchestrator | Wednesday 18 February 2026 06:16:48 +0000 (0:00:01.806) 0:25:37.643 **** 2026-02-18 06:17:07.418111 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-02-18 06:17:07.418122 | orchestrator | 2026-02-18 06:17:07.418134 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-18 06:17:07.418151 | orchestrator | Wednesday 18 February 2026 06:16:50 +0000 (0:00:01.498) 0:25:39.142 **** 2026-02-18 06:17:07.418163 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:17:07.418173 | orchestrator | 2026-02-18 06:17:07.418184 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-18 06:17:07.418195 | orchestrator | Wednesday 18 February 2026 06:16:51 +0000 (0:00:01.475) 0:25:40.617 **** 2026-02-18 06:17:07.418206 | 
orchestrator | skipping: [testbed-node-0] 2026-02-18 06:17:07.418217 | orchestrator | 2026-02-18 06:17:07.418228 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-18 06:17:07.418239 | orchestrator | Wednesday 18 February 2026 06:16:52 +0000 (0:00:01.218) 0:25:41.835 **** 2026-02-18 06:17:07.418250 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-18 06:17:07.418261 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-18 06:17:07.418272 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-18 06:17:07.418283 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-18 06:17:07.418294 | orchestrator | 2026-02-18 06:17:07.418305 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-18 06:17:07.418316 | orchestrator | Wednesday 18 February 2026 06:17:00 +0000 (0:00:07.807) 0:25:49.643 **** 2026-02-18 06:17:07.418327 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:17:07.418338 | orchestrator | 2026-02-18 06:17:07.418349 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-18 06:17:07.418359 | orchestrator | Wednesday 18 February 2026 06:17:02 +0000 (0:00:01.271) 0:25:50.915 **** 2026-02-18 06:17:07.418370 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-18 06:17:07.418381 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-18 06:17:07.418392 | orchestrator | 2026-02-18 06:17:07.418403 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-18 06:17:07.418414 | orchestrator | Wednesday 18 February 2026 06:17:05 +0000 (0:00:03.401) 0:25:54.317 **** 2026-02-18 06:17:07.418431 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-18 06:18:05.392741 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-18 06:18:05.392823 | orchestrator 
| 2026-02-18 06:18:05.392831 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-18 06:18:05.392838 | orchestrator | Wednesday 18 February 2026 06:17:07 +0000 (0:00:01.964) 0:25:56.282 **** 2026-02-18 06:18:05.392842 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:18:05.392846 | orchestrator | 2026-02-18 06:18:05.392851 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-18 06:18:05.392855 | orchestrator | Wednesday 18 February 2026 06:17:08 +0000 (0:00:01.556) 0:25:57.838 **** 2026-02-18 06:18:05.392869 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:18:05.392873 | orchestrator | 2026-02-18 06:18:05.392877 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-18 06:18:05.392881 | orchestrator | Wednesday 18 February 2026 06:17:10 +0000 (0:00:01.141) 0:25:58.979 **** 2026-02-18 06:18:05.392885 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:18:05.392889 | orchestrator | 2026-02-18 06:18:05.392893 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-18 06:18:05.392897 | orchestrator | Wednesday 18 February 2026 06:17:11 +0000 (0:00:01.195) 0:26:00.175 **** 2026-02-18 06:18:05.392900 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-02-18 06:18:05.392905 | orchestrator | 2026-02-18 06:18:05.392909 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-18 06:18:05.392913 | orchestrator | Wednesday 18 February 2026 06:17:12 +0000 (0:00:01.549) 0:26:01.725 **** 2026-02-18 06:18:05.392917 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:18:05.392920 | orchestrator | 2026-02-18 06:18:05.392924 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-18 06:18:05.392928 | orchestrator | 
Wednesday 18 February 2026 06:17:14 +0000 (0:00:01.164) 0:26:02.890 **** 2026-02-18 06:18:05.392946 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:18:05.392950 | orchestrator | 2026-02-18 06:18:05.392954 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-18 06:18:05.392957 | orchestrator | Wednesday 18 February 2026 06:17:15 +0000 (0:00:01.161) 0:26:04.051 **** 2026-02-18 06:18:05.392961 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-02-18 06:18:05.392965 | orchestrator | 2026-02-18 06:18:05.392969 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-18 06:18:05.392973 | orchestrator | Wednesday 18 February 2026 06:17:16 +0000 (0:00:01.561) 0:26:05.613 **** 2026-02-18 06:18:05.392977 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:18:05.392981 | orchestrator | 2026-02-18 06:18:05.392984 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-18 06:18:05.392988 | orchestrator | Wednesday 18 February 2026 06:17:18 +0000 (0:00:02.128) 0:26:07.741 **** 2026-02-18 06:18:05.392992 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:18:05.392996 | orchestrator | 2026-02-18 06:18:05.392999 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-18 06:18:05.393003 | orchestrator | Wednesday 18 February 2026 06:17:20 +0000 (0:00:02.051) 0:26:09.793 **** 2026-02-18 06:18:05.393009 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:18:05.393013 | orchestrator | 2026-02-18 06:18:05.393017 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-18 06:18:05.393020 | orchestrator | Wednesday 18 February 2026 06:17:23 +0000 (0:00:02.477) 0:26:12.270 **** 2026-02-18 06:18:05.393024 | orchestrator | changed: [testbed-node-0] 2026-02-18 06:18:05.393028 | orchestrator | 
2026-02-18 06:18:05.393032 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-18 06:18:05.393036 | orchestrator | Wednesday 18 February 2026 06:17:27 +0000 (0:00:03.923) 0:26:16.194 **** 2026-02-18 06:18:05.393039 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:18:05.393043 | orchestrator | 2026-02-18 06:18:05.393047 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-18 06:18:05.393051 | orchestrator | 2026-02-18 06:18:05.393055 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-18 06:18:05.393059 | orchestrator | Wednesday 18 February 2026 06:17:28 +0000 (0:00:01.072) 0:26:17.267 **** 2026-02-18 06:18:05.393062 | orchestrator | changed: [testbed-node-1] 2026-02-18 06:18:05.393066 | orchestrator | 2026-02-18 06:18:05.393070 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-18 06:18:05.393074 | orchestrator | Wednesday 18 February 2026 06:17:41 +0000 (0:00:12.637) 0:26:29.905 **** 2026-02-18 06:18:05.393077 | orchestrator | changed: [testbed-node-1] 2026-02-18 06:18:05.393081 | orchestrator | 2026-02-18 06:18:05.393085 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 06:18:05.393089 | orchestrator | Wednesday 18 February 2026 06:17:43 +0000 (0:00:02.119) 0:26:32.025 **** 2026-02-18 06:18:05.393092 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-18 06:18:05.393096 | orchestrator | 2026-02-18 06:18:05.393100 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 06:18:05.393104 | orchestrator | Wednesday 18 February 2026 06:17:44 +0000 (0:00:01.149) 0:26:33.174 **** 2026-02-18 06:18:05.393107 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:18:05.393111 | orchestrator | 
2026-02-18 06:18:05.393115 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-18 06:18:05.393119 | orchestrator | Wednesday 18 February 2026 06:17:45 +0000 (0:00:01.505) 0:26:34.680 **** 2026-02-18 06:18:05.393122 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:18:05.393126 | orchestrator | 2026-02-18 06:18:05.393130 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 06:18:05.393134 | orchestrator | Wednesday 18 February 2026 06:17:46 +0000 (0:00:01.139) 0:26:35.819 **** 2026-02-18 06:18:05.393142 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:18:05.393145 | orchestrator | 2026-02-18 06:18:05.393149 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 06:18:05.393153 | orchestrator | Wednesday 18 February 2026 06:17:48 +0000 (0:00:01.410) 0:26:37.229 **** 2026-02-18 06:18:05.393157 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:18:05.393161 | orchestrator | 2026-02-18 06:18:05.393174 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-18 06:18:05.393178 | orchestrator | Wednesday 18 February 2026 06:17:49 +0000 (0:00:01.239) 0:26:38.469 **** 2026-02-18 06:18:05.393182 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:18:05.393186 | orchestrator | 2026-02-18 06:18:05.393189 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-18 06:18:05.393193 | orchestrator | Wednesday 18 February 2026 06:17:51 +0000 (0:00:01.714) 0:26:40.184 **** 2026-02-18 06:18:05.393197 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:18:05.393201 | orchestrator | 2026-02-18 06:18:05.393207 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-18 06:18:05.393211 | orchestrator | Wednesday 18 February 2026 06:17:52 +0000 (0:00:01.199) 0:26:41.383 
**** 2026-02-18 06:18:05.393215 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:05.393219 | orchestrator | 2026-02-18 06:18:05.393223 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-18 06:18:05.393227 | orchestrator | Wednesday 18 February 2026 06:17:53 +0000 (0:00:01.137) 0:26:42.520 **** 2026-02-18 06:18:05.393231 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:18:05.393234 | orchestrator | 2026-02-18 06:18:05.393238 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-18 06:18:05.393242 | orchestrator | Wednesday 18 February 2026 06:17:54 +0000 (0:00:01.111) 0:26:43.631 **** 2026-02-18 06:18:05.393246 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:18:05.393250 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-18 06:18:05.393254 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:18:05.393258 | orchestrator | 2026-02-18 06:18:05.393261 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-18 06:18:05.393265 | orchestrator | Wednesday 18 February 2026 06:17:56 +0000 (0:00:01.787) 0:26:45.419 **** 2026-02-18 06:18:05.393269 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:18:05.393273 | orchestrator | 2026-02-18 06:18:05.393276 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-18 06:18:05.393280 | orchestrator | Wednesday 18 February 2026 06:17:57 +0000 (0:00:01.254) 0:26:46.674 **** 2026-02-18 06:18:05.393284 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:18:05.393288 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-18 06:18:05.393292 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-02-18 06:18:05.393295 | orchestrator | 2026-02-18 06:18:05.393299 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-18 06:18:05.393303 | orchestrator | Wednesday 18 February 2026 06:18:00 +0000 (0:00:02.968) 0:26:49.642 **** 2026-02-18 06:18:05.393307 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-18 06:18:05.393312 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-18 06:18:05.393316 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-18 06:18:05.393321 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:05.393325 | orchestrator | 2026-02-18 06:18:05.393330 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-18 06:18:05.393334 | orchestrator | Wednesday 18 February 2026 06:18:02 +0000 (0:00:01.445) 0:26:51.088 **** 2026-02-18 06:18:05.393340 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 06:18:05.393350 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-18 06:18:05.393355 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 06:18:05.393359 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:05.393364 | orchestrator | 2026-02-18 06:18:05.393368 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2026-02-18 06:18:05.393372 | orchestrator | Wednesday 18 February 2026 06:18:04 +0000 (0:00:01.951) 0:26:53.039 **** 2026-02-18 06:18:05.393378 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:18:05.393386 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:18:05.393394 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:18:25.207281 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:25.207402 | orchestrator | 2026-02-18 06:18:25.207419 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-18 06:18:25.207432 | orchestrator | Wednesday 18 February 2026 06:18:05 +0000 (0:00:01.219) 0:26:54.258 **** 2026-02-18 06:18:25.207448 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '07dd2330a089', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:17:58.319930', 'end': '2026-02-18 06:17:58.373276', 'delta': '0:00:00.053346', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-18 06:18:25.207463 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:17:58.950836', 'end': '2026-02-18 06:17:58.996836', 'delta': '0:00:00.046000', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-18 06:18:25.207497 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:17:59.544166', 'end': '2026-02-18 06:17:59.589102', 'delta': '0:00:00.044936', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-18 06:18:25.207509 | orchestrator | 2026-02-18 06:18:25.207520 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-18 06:18:25.207531 | orchestrator | Wednesday 18 February 2026 06:18:06 +0000 (0:00:01.201) 0:26:55.460 **** 2026-02-18 06:18:25.207615 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:18:25.207628 | orchestrator | 2026-02-18 06:18:25.207639 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-18 06:18:25.207650 | orchestrator | Wednesday 18 February 2026 06:18:07 +0000 (0:00:01.267) 0:26:56.727 **** 2026-02-18 06:18:25.207661 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:25.207672 | orchestrator | 2026-02-18 06:18:25.207683 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-18 06:18:25.207693 | orchestrator | Wednesday 18 February 2026 06:18:09 +0000 (0:00:01.327) 0:26:58.055 **** 2026-02-18 06:18:25.207718 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:18:25.207730 | orchestrator | 2026-02-18 06:18:25.207744 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-18 06:18:25.207763 | orchestrator | Wednesday 18 February 2026 06:18:10 +0000 (0:00:01.165) 0:26:59.221 **** 2026-02-18 06:18:25.207780 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:18:25.207797 | orchestrator | 2026-02-18 06:18:25.207815 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:18:25.207834 | orchestrator | Wednesday 18 February 2026 06:18:12 +0000 (0:00:02.029) 0:27:01.250 **** 2026-02-18 
06:18:25.207853 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:18:25.207871 | orchestrator | 2026-02-18 06:18:25.207889 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 06:18:25.207905 | orchestrator | Wednesday 18 February 2026 06:18:13 +0000 (0:00:01.183) 0:27:02.434 **** 2026-02-18 06:18:25.207923 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:25.207942 | orchestrator | 2026-02-18 06:18:25.207961 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 06:18:25.207980 | orchestrator | Wednesday 18 February 2026 06:18:14 +0000 (0:00:01.123) 0:27:03.557 **** 2026-02-18 06:18:25.207998 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:25.208016 | orchestrator | 2026-02-18 06:18:25.208035 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:18:25.208056 | orchestrator | Wednesday 18 February 2026 06:18:15 +0000 (0:00:01.228) 0:27:04.786 **** 2026-02-18 06:18:25.208075 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:25.208093 | orchestrator | 2026-02-18 06:18:25.208138 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 06:18:25.208152 | orchestrator | Wednesday 18 February 2026 06:18:17 +0000 (0:00:01.180) 0:27:05.966 **** 2026-02-18 06:18:25.208163 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:25.208174 | orchestrator | 2026-02-18 06:18:25.208184 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 06:18:25.208195 | orchestrator | Wednesday 18 February 2026 06:18:18 +0000 (0:00:01.134) 0:27:07.100 **** 2026-02-18 06:18:25.208206 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:25.208216 | orchestrator | 2026-02-18 06:18:25.208239 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-02-18 06:18:25.208250 | orchestrator | Wednesday 18 February 2026 06:18:19 +0000 (0:00:01.142) 0:27:08.243 **** 2026-02-18 06:18:25.208261 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:25.208271 | orchestrator | 2026-02-18 06:18:25.208282 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 06:18:25.208293 | orchestrator | Wednesday 18 February 2026 06:18:20 +0000 (0:00:01.136) 0:27:09.380 **** 2026-02-18 06:18:25.208303 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:25.208314 | orchestrator | 2026-02-18 06:18:25.208324 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-18 06:18:25.208335 | orchestrator | Wednesday 18 February 2026 06:18:21 +0000 (0:00:01.138) 0:27:10.518 **** 2026-02-18 06:18:25.208345 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:25.208356 | orchestrator | 2026-02-18 06:18:25.208367 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 06:18:25.208378 | orchestrator | Wednesday 18 February 2026 06:18:22 +0000 (0:00:01.129) 0:27:11.648 **** 2026-02-18 06:18:25.208389 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:25.208399 | orchestrator | 2026-02-18 06:18:25.208410 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-18 06:18:25.208420 | orchestrator | Wednesday 18 February 2026 06:18:23 +0000 (0:00:01.132) 0:27:12.780 **** 2026-02-18 06:18:25.208432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-18 06:18:25.208447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:18:25.208459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:18:25.208471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 06:18:25.208484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-02-18 06:18:25.208496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:18:25.208528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:18:26.474096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '907e2eef', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14', 
'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-18 06:18:26.474214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:18:26.474242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:18:26.474263 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:18:26.474283 | orchestrator | 2026-02-18 06:18:26.474305 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 06:18:26.474325 | orchestrator | Wednesday 18 February 2026 06:18:25 +0000 (0:00:01.283) 0:27:14.064 **** 2026-02-18 06:18:26.474392 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:18:26.474429 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:18:26.474442 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:18:26.474455 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:18:26.474467 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:18:26.474510 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:18:26.474522 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:18:26.474588 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '907e2eef', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_907e2eef-6213-4277-a236-2ae103a400c6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:19:01.440192 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:19:01.440303 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:19:01.440341 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.440355 | orchestrator | 2026-02-18 06:19:01.440366 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:19:01.440378 | 
orchestrator | Wednesday 18 February 2026 06:18:26 +0000 (0:00:01.272) 0:27:15.336 **** 2026-02-18 06:19:01.440387 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:01.440398 | orchestrator | 2026-02-18 06:19:01.440408 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:19:01.440418 | orchestrator | Wednesday 18 February 2026 06:18:27 +0000 (0:00:01.520) 0:27:16.857 **** 2026-02-18 06:19:01.440428 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:01.440438 | orchestrator | 2026-02-18 06:19:01.440448 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:19:01.440458 | orchestrator | Wednesday 18 February 2026 06:18:29 +0000 (0:00:01.129) 0:27:17.987 **** 2026-02-18 06:19:01.440468 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:01.440477 | orchestrator | 2026-02-18 06:19:01.440500 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:19:01.440510 | orchestrator | Wednesday 18 February 2026 06:18:30 +0000 (0:00:01.497) 0:27:19.484 **** 2026-02-18 06:19:01.440520 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.440530 | orchestrator | 2026-02-18 06:19:01.440540 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:19:01.440550 | orchestrator | Wednesday 18 February 2026 06:18:31 +0000 (0:00:01.187) 0:27:20.672 **** 2026-02-18 06:19:01.440559 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.440615 | orchestrator | 2026-02-18 06:19:01.440626 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:19:01.440636 | orchestrator | Wednesday 18 February 2026 06:18:33 +0000 (0:00:01.206) 0:27:21.879 **** 2026-02-18 06:19:01.440646 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.440656 | orchestrator | 2026-02-18 06:19:01.440666 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:19:01.440675 | orchestrator | Wednesday 18 February 2026 06:18:34 +0000 (0:00:01.150) 0:27:23.029 **** 2026-02-18 06:19:01.440685 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-18 06:19:01.440695 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-18 06:19:01.440705 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-18 06:19:01.440715 | orchestrator | 2026-02-18 06:19:01.440726 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:19:01.440737 | orchestrator | Wednesday 18 February 2026 06:18:35 +0000 (0:00:01.758) 0:27:24.788 **** 2026-02-18 06:19:01.440749 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-18 06:19:01.440761 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-18 06:19:01.440772 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-18 06:19:01.440784 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.440795 | orchestrator | 2026-02-18 06:19:01.440807 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:19:01.440819 | orchestrator | Wednesday 18 February 2026 06:18:37 +0000 (0:00:01.221) 0:27:26.009 **** 2026-02-18 06:19:01.440830 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.440841 | orchestrator | 2026-02-18 06:19:01.440852 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 06:19:01.440864 | orchestrator | Wednesday 18 February 2026 06:18:38 +0000 (0:00:01.146) 0:27:27.156 **** 2026-02-18 06:19:01.440875 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:19:01.440888 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-18 
06:19:01.440899 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:19:01.440918 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:19:01.440930 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:19:01.440941 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:19:01.440968 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:19:01.440981 | orchestrator | 2026-02-18 06:19:01.440993 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 06:19:01.441005 | orchestrator | Wednesday 18 February 2026 06:18:40 +0000 (0:00:02.213) 0:27:29.369 **** 2026-02-18 06:19:01.441016 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:19:01.441026 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-18 06:19:01.441037 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:19:01.441047 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:19:01.441058 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:19:01.441068 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:19:01.441078 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:19:01.441089 | orchestrator | 2026-02-18 06:19:01.441100 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 06:19:01.441110 | orchestrator | Wednesday 18 February 2026 06:18:42 +0000 (0:00:02.343) 
0:27:31.713 **** 2026-02-18 06:19:01.441120 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-18 06:19:01.441132 | orchestrator | 2026-02-18 06:19:01.441142 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 06:19:01.441153 | orchestrator | Wednesday 18 February 2026 06:18:44 +0000 (0:00:01.232) 0:27:32.945 **** 2026-02-18 06:19:01.441163 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-18 06:19:01.441173 | orchestrator | 2026-02-18 06:19:01.441184 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 06:19:01.441194 | orchestrator | Wednesday 18 February 2026 06:18:45 +0000 (0:00:01.156) 0:27:34.102 **** 2026-02-18 06:19:01.441205 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:01.441215 | orchestrator | 2026-02-18 06:19:01.441226 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 06:19:01.441236 | orchestrator | Wednesday 18 February 2026 06:18:46 +0000 (0:00:01.615) 0:27:35.718 **** 2026-02-18 06:19:01.441246 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.441257 | orchestrator | 2026-02-18 06:19:01.441267 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-18 06:19:01.441278 | orchestrator | Wednesday 18 February 2026 06:18:47 +0000 (0:00:01.139) 0:27:36.858 **** 2026-02-18 06:19:01.441293 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.441304 | orchestrator | 2026-02-18 06:19:01.441314 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-18 06:19:01.441324 | orchestrator | Wednesday 18 February 2026 06:18:49 +0000 (0:00:01.134) 0:27:37.992 **** 2026-02-18 06:19:01.441334 | orchestrator | skipping: [testbed-node-1] 
2026-02-18 06:19:01.441345 | orchestrator | 2026-02-18 06:19:01.441355 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-18 06:19:01.441365 | orchestrator | Wednesday 18 February 2026 06:18:50 +0000 (0:00:01.117) 0:27:39.110 **** 2026-02-18 06:19:01.441376 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:01.441386 | orchestrator | 2026-02-18 06:19:01.441397 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-18 06:19:01.441407 | orchestrator | Wednesday 18 February 2026 06:18:51 +0000 (0:00:01.589) 0:27:40.699 **** 2026-02-18 06:19:01.441424 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.441439 | orchestrator | 2026-02-18 06:19:01.441455 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-18 06:19:01.441472 | orchestrator | Wednesday 18 February 2026 06:18:52 +0000 (0:00:01.154) 0:27:41.854 **** 2026-02-18 06:19:01.441486 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.441502 | orchestrator | 2026-02-18 06:19:01.441517 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-18 06:19:01.441533 | orchestrator | Wednesday 18 February 2026 06:18:54 +0000 (0:00:01.119) 0:27:42.973 **** 2026-02-18 06:19:01.441550 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:01.441563 | orchestrator | 2026-02-18 06:19:01.441619 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-18 06:19:01.441629 | orchestrator | Wednesday 18 February 2026 06:18:55 +0000 (0:00:01.734) 0:27:44.708 **** 2026-02-18 06:19:01.441639 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:01.441649 | orchestrator | 2026-02-18 06:19:01.441659 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-18 06:19:01.441668 | orchestrator | Wednesday 18 
February 2026 06:18:57 +0000 (0:00:01.599) 0:27:46.308 **** 2026-02-18 06:19:01.441678 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.441687 | orchestrator | 2026-02-18 06:19:01.441697 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 06:19:01.441707 | orchestrator | Wednesday 18 February 2026 06:18:58 +0000 (0:00:00.858) 0:27:47.167 **** 2026-02-18 06:19:01.441716 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:01.441726 | orchestrator | 2026-02-18 06:19:01.441735 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 06:19:01.441745 | orchestrator | Wednesday 18 February 2026 06:18:59 +0000 (0:00:00.799) 0:27:47.967 **** 2026-02-18 06:19:01.441755 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.441764 | orchestrator | 2026-02-18 06:19:01.441774 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 06:19:01.441784 | orchestrator | Wednesday 18 February 2026 06:18:59 +0000 (0:00:00.756) 0:27:48.723 **** 2026-02-18 06:19:01.441810 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:01.441847 | orchestrator | 2026-02-18 06:19:01.441864 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 06:19:01.441880 | orchestrator | Wednesday 18 February 2026 06:19:00 +0000 (0:00:00.804) 0:27:49.528 **** 2026-02-18 06:19:01.441905 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.526209 | orchestrator | 2026-02-18 06:19:42.526326 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 06:19:42.526344 | orchestrator | Wednesday 18 February 2026 06:19:01 +0000 (0:00:00.776) 0:27:50.305 **** 2026-02-18 06:19:42.526356 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.526369 | orchestrator | 2026-02-18 06:19:42.526380 | orchestrator | 
TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 06:19:42.526392 | orchestrator | Wednesday 18 February 2026 06:19:02 +0000 (0:00:00.813) 0:27:51.118 **** 2026-02-18 06:19:42.526403 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.526414 | orchestrator | 2026-02-18 06:19:42.526426 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 06:19:42.526437 | orchestrator | Wednesday 18 February 2026 06:19:03 +0000 (0:00:00.876) 0:27:51.995 **** 2026-02-18 06:19:42.526448 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:42.526460 | orchestrator | 2026-02-18 06:19:42.526471 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 06:19:42.526482 | orchestrator | Wednesday 18 February 2026 06:19:03 +0000 (0:00:00.792) 0:27:52.787 **** 2026-02-18 06:19:42.526493 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:42.526504 | orchestrator | 2026-02-18 06:19:42.526515 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 06:19:42.526527 | orchestrator | Wednesday 18 February 2026 06:19:04 +0000 (0:00:00.825) 0:27:53.613 **** 2026-02-18 06:19:42.526566 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:42.526579 | orchestrator | 2026-02-18 06:19:42.526591 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-18 06:19:42.526602 | orchestrator | Wednesday 18 February 2026 06:19:05 +0000 (0:00:00.880) 0:27:54.493 **** 2026-02-18 06:19:42.526642 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.526653 | orchestrator | 2026-02-18 06:19:42.526664 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-18 06:19:42.526675 | orchestrator | Wednesday 18 February 2026 06:19:06 +0000 (0:00:00.860) 0:27:55.353 **** 2026-02-18 06:19:42.526686 | 
orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.526697 | orchestrator | 2026-02-18 06:19:42.526708 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-18 06:19:42.526719 | orchestrator | Wednesday 18 February 2026 06:19:07 +0000 (0:00:00.777) 0:27:56.131 **** 2026-02-18 06:19:42.526730 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.526743 | orchestrator | 2026-02-18 06:19:42.526756 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-18 06:19:42.526769 | orchestrator | Wednesday 18 February 2026 06:19:08 +0000 (0:00:00.815) 0:27:56.946 **** 2026-02-18 06:19:42.526781 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.526794 | orchestrator | 2026-02-18 06:19:42.526806 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-18 06:19:42.526833 | orchestrator | Wednesday 18 February 2026 06:19:08 +0000 (0:00:00.748) 0:27:57.694 **** 2026-02-18 06:19:42.526845 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.526858 | orchestrator | 2026-02-18 06:19:42.526870 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-18 06:19:42.526883 | orchestrator | Wednesday 18 February 2026 06:19:09 +0000 (0:00:00.801) 0:27:58.496 **** 2026-02-18 06:19:42.526895 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.526908 | orchestrator | 2026-02-18 06:19:42.526921 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-18 06:19:42.526934 | orchestrator | Wednesday 18 February 2026 06:19:10 +0000 (0:00:00.799) 0:27:59.296 **** 2026-02-18 06:19:42.526946 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.526958 | orchestrator | 2026-02-18 06:19:42.526971 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with 
ceph_stable_release] *** 2026-02-18 06:19:42.526985 | orchestrator | Wednesday 18 February 2026 06:19:11 +0000 (0:00:00.792) 0:28:00.089 **** 2026-02-18 06:19:42.526997 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.527010 | orchestrator | 2026-02-18 06:19:42.527022 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-18 06:19:42.527033 | orchestrator | Wednesday 18 February 2026 06:19:12 +0000 (0:00:00.862) 0:28:00.951 **** 2026-02-18 06:19:42.527043 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.527054 | orchestrator | 2026-02-18 06:19:42.527065 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-18 06:19:42.527076 | orchestrator | Wednesday 18 February 2026 06:19:12 +0000 (0:00:00.765) 0:28:01.717 **** 2026-02-18 06:19:42.527087 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.527098 | orchestrator | 2026-02-18 06:19:42.527109 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-18 06:19:42.527120 | orchestrator | Wednesday 18 February 2026 06:19:13 +0000 (0:00:00.812) 0:28:02.530 **** 2026-02-18 06:19:42.527131 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.527142 | orchestrator | 2026-02-18 06:19:42.527152 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-18 06:19:42.527164 | orchestrator | Wednesday 18 February 2026 06:19:14 +0000 (0:00:00.810) 0:28:03.340 **** 2026-02-18 06:19:42.527174 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.527186 | orchestrator | 2026-02-18 06:19:42.527197 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-18 06:19:42.527207 | orchestrator | Wednesday 18 February 2026 06:19:15 +0000 (0:00:00.776) 0:28:04.117 **** 2026-02-18 06:19:42.527227 | orchestrator | ok: [testbed-node-1] 
2026-02-18 06:19:42.527238 | orchestrator | 2026-02-18 06:19:42.527249 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-18 06:19:42.527260 | orchestrator | Wednesday 18 February 2026 06:19:16 +0000 (0:00:01.559) 0:28:05.676 **** 2026-02-18 06:19:42.527271 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:42.527282 | orchestrator | 2026-02-18 06:19:42.527293 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-18 06:19:42.527304 | orchestrator | Wednesday 18 February 2026 06:19:18 +0000 (0:00:02.046) 0:28:07.722 **** 2026-02-18 06:19:42.527315 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-02-18 06:19:42.527327 | orchestrator | 2026-02-18 06:19:42.527355 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-18 06:19:42.527367 | orchestrator | Wednesday 18 February 2026 06:19:20 +0000 (0:00:01.270) 0:28:08.993 **** 2026-02-18 06:19:42.527378 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.527389 | orchestrator | 2026-02-18 06:19:42.527400 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-18 06:19:42.527411 | orchestrator | Wednesday 18 February 2026 06:19:21 +0000 (0:00:01.135) 0:28:10.128 **** 2026-02-18 06:19:42.527421 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.527432 | orchestrator | 2026-02-18 06:19:42.527443 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-18 06:19:42.527454 | orchestrator | Wednesday 18 February 2026 06:19:22 +0000 (0:00:01.159) 0:28:11.288 **** 2026-02-18 06:19:42.527464 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-18 06:19:42.527475 | orchestrator | ok: [testbed-node-1] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-18 06:19:42.527486 | orchestrator | 2026-02-18 06:19:42.527497 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-18 06:19:42.527507 | orchestrator | Wednesday 18 February 2026 06:19:24 +0000 (0:00:01.961) 0:28:13.250 **** 2026-02-18 06:19:42.527518 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:42.527529 | orchestrator | 2026-02-18 06:19:42.527539 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-18 06:19:42.527550 | orchestrator | Wednesday 18 February 2026 06:19:25 +0000 (0:00:01.539) 0:28:14.790 **** 2026-02-18 06:19:42.527560 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.527571 | orchestrator | 2026-02-18 06:19:42.527582 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-18 06:19:42.527592 | orchestrator | Wednesday 18 February 2026 06:19:27 +0000 (0:00:01.152) 0:28:15.943 **** 2026-02-18 06:19:42.527603 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.527675 | orchestrator | 2026-02-18 06:19:42.527686 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-18 06:19:42.527698 | orchestrator | Wednesday 18 February 2026 06:19:27 +0000 (0:00:00.784) 0:28:16.727 **** 2026-02-18 06:19:42.527716 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.527734 | orchestrator | 2026-02-18 06:19:42.527758 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-18 06:19:42.527786 | orchestrator | Wednesday 18 February 2026 06:19:28 +0000 (0:00:00.773) 0:28:17.500 **** 2026-02-18 06:19:42.527803 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-02-18 06:19:42.527821 | orchestrator | 2026-02-18 06:19:42.527839 | orchestrator | TASK 
[ceph-container-common : Pulling Ceph container image] ******************** 2026-02-18 06:19:42.527867 | orchestrator | Wednesday 18 February 2026 06:19:29 +0000 (0:00:01.194) 0:28:18.695 **** 2026-02-18 06:19:42.527885 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:42.527903 | orchestrator | 2026-02-18 06:19:42.527921 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-18 06:19:42.527940 | orchestrator | Wednesday 18 February 2026 06:19:31 +0000 (0:00:01.791) 0:28:20.487 **** 2026-02-18 06:19:42.527971 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 06:19:42.527990 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 06:19:42.528010 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 06:19:42.528029 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.528047 | orchestrator | 2026-02-18 06:19:42.528064 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-18 06:19:42.528081 | orchestrator | Wednesday 18 February 2026 06:19:32 +0000 (0:00:01.134) 0:28:21.621 **** 2026-02-18 06:19:42.528099 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.528117 | orchestrator | 2026-02-18 06:19:42.528134 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-18 06:19:42.528152 | orchestrator | Wednesday 18 February 2026 06:19:33 +0000 (0:00:01.116) 0:28:22.738 **** 2026-02-18 06:19:42.528171 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.528190 | orchestrator | 2026-02-18 06:19:42.528202 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-18 06:19:42.528213 | orchestrator | Wednesday 18 February 2026 06:19:35 +0000 (0:00:01.197) 0:28:23.935 **** 2026-02-18 06:19:42.528224 
| orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.528235 | orchestrator | 2026-02-18 06:19:42.528245 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-18 06:19:42.528256 | orchestrator | Wednesday 18 February 2026 06:19:36 +0000 (0:00:01.148) 0:28:25.084 **** 2026-02-18 06:19:42.528267 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.528277 | orchestrator | 2026-02-18 06:19:42.528288 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-18 06:19:42.528299 | orchestrator | Wednesday 18 February 2026 06:19:37 +0000 (0:00:01.248) 0:28:26.333 **** 2026-02-18 06:19:42.528309 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:19:42.528320 | orchestrator | 2026-02-18 06:19:42.528331 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-18 06:19:42.528342 | orchestrator | Wednesday 18 February 2026 06:19:38 +0000 (0:00:00.801) 0:28:27.135 **** 2026-02-18 06:19:42.528352 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:42.528363 | orchestrator | 2026-02-18 06:19:42.528374 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-18 06:19:42.528385 | orchestrator | Wednesday 18 February 2026 06:19:40 +0000 (0:00:02.251) 0:28:29.386 **** 2026-02-18 06:19:42.528396 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:19:42.528406 | orchestrator | 2026-02-18 06:19:42.528417 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-18 06:19:42.528428 | orchestrator | Wednesday 18 February 2026 06:19:41 +0000 (0:00:00.792) 0:28:30.179 **** 2026-02-18 06:19:42.528439 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-02-18 06:19:42.528450 | orchestrator | 2026-02-18 06:19:42.528473 | orchestrator | TASK [ceph-container-common : 
Set_fact ceph_release jewel] ********************* 2026-02-18 06:20:19.898548 | orchestrator | Wednesday 18 February 2026 06:19:42 +0000 (0:00:01.211) 0:28:31.390 **** 2026-02-18 06:20:19.898726 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:20:19.898758 | orchestrator | 2026-02-18 06:20:19.898777 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-18 06:20:19.898794 | orchestrator | Wednesday 18 February 2026 06:19:43 +0000 (0:00:01.177) 0:28:32.568 **** 2026-02-18 06:20:19.898812 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:20:19.898828 | orchestrator | 2026-02-18 06:20:19.898845 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-18 06:20:19.898861 | orchestrator | Wednesday 18 February 2026 06:19:44 +0000 (0:00:01.158) 0:28:33.727 **** 2026-02-18 06:20:19.898876 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:20:19.898892 | orchestrator | 2026-02-18 06:20:19.898908 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-18 06:20:19.898956 | orchestrator | Wednesday 18 February 2026 06:19:45 +0000 (0:00:01.125) 0:28:34.853 **** 2026-02-18 06:20:19.898974 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:20:19.898989 | orchestrator | 2026-02-18 06:20:19.899005 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-18 06:20:19.899020 | orchestrator | Wednesday 18 February 2026 06:19:47 +0000 (0:00:01.154) 0:28:36.007 **** 2026-02-18 06:20:19.899037 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:20:19.899052 | orchestrator | 2026-02-18 06:20:19.899070 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-18 06:20:19.899087 | orchestrator | Wednesday 18 February 2026 06:19:48 +0000 (0:00:01.137) 0:28:37.145 **** 2026-02-18 06:20:19.899104 | orchestrator | 
skipping: [testbed-node-1] 2026-02-18 06:20:19.899121 | orchestrator | 2026-02-18 06:20:19.899138 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-18 06:20:19.899154 | orchestrator | Wednesday 18 February 2026 06:19:49 +0000 (0:00:01.201) 0:28:38.347 **** 2026-02-18 06:20:19.899171 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:20:19.899187 | orchestrator | 2026-02-18 06:20:19.899203 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-18 06:20:19.899220 | orchestrator | Wednesday 18 February 2026 06:19:50 +0000 (0:00:01.175) 0:28:39.523 **** 2026-02-18 06:20:19.899237 | orchestrator | skipping: [testbed-node-1] 2026-02-18 06:20:19.899253 | orchestrator | 2026-02-18 06:20:19.899270 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-18 06:20:19.899286 | orchestrator | Wednesday 18 February 2026 06:19:51 +0000 (0:00:01.162) 0:28:40.685 **** 2026-02-18 06:20:19.899301 | orchestrator | ok: [testbed-node-1] 2026-02-18 06:20:19.899318 | orchestrator | 2026-02-18 06:20:19.899335 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-18 06:20:19.899373 | orchestrator | Wednesday 18 February 2026 06:19:52 +0000 (0:00:00.830) 0:28:41.516 **** 2026-02-18 06:20:19.899391 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-02-18 06:20:19.899408 | orchestrator | 2026-02-18 06:20:19.899425 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-18 06:20:19.899440 | orchestrator | Wednesday 18 February 2026 06:19:53 +0000 (0:00:01.112) 0:28:42.628 **** 2026-02-18 06:20:19.899455 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-02-18 06:20:19.899471 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-18 
06:20:19.899486 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-18 06:20:19.899502 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-18 06:20:19.899518 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-18 06:20:19.899535 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-18 06:20:19.899553 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-18 06:20:19.899569 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-18 06:20:19.899584 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-18 06:20:19.899601 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-18 06:20:19.899618 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-18 06:20:19.899635 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-18 06:20:19.899681 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-18 06:20:19.899699 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-18 06:20:19.899715 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-18 06:20:19.899729 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-18 06:20:19.899744 | orchestrator |
2026-02-18 06:20:19.899761 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-18 06:20:19.899777 | orchestrator | Wednesday 18 February 2026 06:20:00 +0000 (0:00:06.791) 0:28:49.420 ****
2026-02-18 06:20:19.899814 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.899830 | orchestrator |
2026-02-18 06:20:19.899846 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-18 06:20:19.899858 | orchestrator | Wednesday 18 February 2026 06:20:01 +0000 (0:00:00.755) 0:28:50.176 ****
2026-02-18 06:20:19.899872 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.899887 | orchestrator |
2026-02-18 06:20:19.899903 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-18 06:20:19.899917 | orchestrator | Wednesday 18 February 2026 06:20:02 +0000 (0:00:00.808) 0:28:50.984 ****
2026-02-18 06:20:19.899932 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.899946 | orchestrator |
2026-02-18 06:20:19.899960 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-18 06:20:19.899975 | orchestrator | Wednesday 18 February 2026 06:20:02 +0000 (0:00:00.818) 0:28:51.802 ****
2026-02-18 06:20:19.899990 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900006 | orchestrator |
2026-02-18 06:20:19.900021 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-18 06:20:19.900064 | orchestrator | Wednesday 18 February 2026 06:20:03 +0000 (0:00:00.782) 0:28:52.585 ****
2026-02-18 06:20:19.900081 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900097 | orchestrator |
2026-02-18 06:20:19.900113 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-18 06:20:19.900125 | orchestrator | Wednesday 18 February 2026 06:20:04 +0000 (0:00:00.759) 0:28:53.345 ****
2026-02-18 06:20:19.900134 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900144 | orchestrator |
2026-02-18 06:20:19.900154 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-18 06:20:19.900170 | orchestrator | Wednesday 18 February 2026 06:20:05 +0000 (0:00:00.930) 0:28:54.276 ****
2026-02-18 06:20:19.900186 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900201 | orchestrator |
2026-02-18 06:20:19.900218 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-18 06:20:19.900232 | orchestrator | Wednesday 18 February 2026 06:20:06 +0000 (0:00:00.772) 0:28:55.049 ****
2026-02-18 06:20:19.900246 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900259 | orchestrator |
2026-02-18 06:20:19.900272 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-18 06:20:19.900286 | orchestrator | Wednesday 18 February 2026 06:20:06 +0000 (0:00:00.784) 0:28:55.834 ****
2026-02-18 06:20:19.900299 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900313 | orchestrator |
2026-02-18 06:20:19.900326 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-18 06:20:19.900341 | orchestrator | Wednesday 18 February 2026 06:20:07 +0000 (0:00:00.815) 0:28:56.649 ****
2026-02-18 06:20:19.900351 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900364 | orchestrator |
2026-02-18 06:20:19.900376 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-18 06:20:19.900389 | orchestrator | Wednesday 18 February 2026 06:20:08 +0000 (0:00:00.833) 0:28:57.483 ****
2026-02-18 06:20:19.900403 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900411 | orchestrator |
2026-02-18 06:20:19.900419 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-18 06:20:19.900426 | orchestrator | Wednesday 18 February 2026 06:20:09 +0000 (0:00:00.827) 0:28:58.310 ****
2026-02-18 06:20:19.900434 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900442 | orchestrator |
2026-02-18 06:20:19.900449 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-18 06:20:19.900457 | orchestrator | Wednesday 18 February 2026 06:20:10 +0000 (0:00:00.800) 0:28:59.111 ****
2026-02-18 06:20:19.900465 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900473 | orchestrator |
2026-02-18 06:20:19.900489 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-18 06:20:19.900507 | orchestrator | Wednesday 18 February 2026 06:20:11 +0000 (0:00:00.884) 0:28:59.996 ****
2026-02-18 06:20:19.900515 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900523 | orchestrator |
2026-02-18 06:20:19.900531 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-18 06:20:19.900539 | orchestrator | Wednesday 18 February 2026 06:20:11 +0000 (0:00:00.797) 0:29:00.794 ****
2026-02-18 06:20:19.900547 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900555 | orchestrator |
2026-02-18 06:20:19.900563 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-18 06:20:19.900571 | orchestrator | Wednesday 18 February 2026 06:20:12 +0000 (0:00:00.895) 0:29:01.689 ****
2026-02-18 06:20:19.900579 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900592 | orchestrator |
2026-02-18 06:20:19.900606 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-18 06:20:19.900620 | orchestrator | Wednesday 18 February 2026 06:20:13 +0000 (0:00:00.804) 0:29:02.493 ****
2026-02-18 06:20:19.900633 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900669 | orchestrator |
2026-02-18 06:20:19.900682 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-18 06:20:19.900698 | orchestrator | Wednesday 18 February 2026 06:20:14 +0000 (0:00:00.791) 0:29:03.285 ****
2026-02-18 06:20:19.900710 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900721 | orchestrator |
2026-02-18 06:20:19.900733 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-18 06:20:19.900746 | orchestrator | Wednesday 18 February 2026 06:20:15 +0000 (0:00:00.872) 0:29:04.158 ****
2026-02-18 06:20:19.900758 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900772 | orchestrator |
2026-02-18 06:20:19.900786 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-18 06:20:19.900799 | orchestrator | Wednesday 18 February 2026 06:20:16 +0000 (0:00:00.830) 0:29:04.988 ****
2026-02-18 06:20:19.900812 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900825 | orchestrator |
2026-02-18 06:20:19.900839 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-18 06:20:19.900852 | orchestrator | Wednesday 18 February 2026 06:20:16 +0000 (0:00:00.811) 0:29:05.800 ****
2026-02-18 06:20:19.900866 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900879 | orchestrator |
2026-02-18 06:20:19.900893 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-18 06:20:19.900906 | orchestrator | Wednesday 18 February 2026 06:20:17 +0000 (0:00:00.818) 0:29:06.619 ****
2026-02-18 06:20:19.900920 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-18 06:20:19.900933 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-18 06:20:19.900947 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-18 06:20:19.900960 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:20:19.900974 | orchestrator |
2026-02-18 06:20:19.900987 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-18 06:20:19.901001 | orchestrator | Wednesday 18 February 2026 06:20:18 +0000 (0:00:01.065) 0:29:07.684 ****
2026-02-18 06:20:19.901015 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-18 06:20:19.901040 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-18 06:21:17.499813 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-18 06:21:17.499919 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:21:17.499933 | orchestrator |
2026-02-18 06:21:17.499945 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-18 06:21:17.499956 | orchestrator | Wednesday 18 February 2026 06:20:19 +0000 (0:00:01.079) 0:29:08.764 ****
2026-02-18 06:21:17.499966 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-18 06:21:17.499977 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-18 06:21:17.500010 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-18 06:21:17.500020 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:21:17.500030 | orchestrator |
2026-02-18 06:21:17.500040 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-18 06:21:17.500049 | orchestrator | Wednesday 18 February 2026 06:20:20 +0000 (0:00:01.089) 0:29:09.853 ****
2026-02-18 06:21:17.500059 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:21:17.500069 | orchestrator |
2026-02-18 06:21:17.500079 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-18 06:21:17.500088 | orchestrator | Wednesday 18 February 2026 06:20:21 +0000 (0:00:00.798) 0:29:10.652 ****
2026-02-18 06:21:17.500098 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-18 06:21:17.500108 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:21:17.500117 | orchestrator |
2026-02-18 06:21:17.500127 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-18 06:21:17.500137 | orchestrator | Wednesday 18 February 2026 06:20:22 +0000 (0:00:00.945) 0:29:11.598 ****
2026-02-18 06:21:17.500146 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:21:17.500156 | orchestrator |
2026-02-18 06:21:17.500166 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-18 06:21:17.500175 | orchestrator | Wednesday 18 February 2026 06:20:24 +0000 (0:00:01.439) 0:29:13.038 ****
2026-02-18 06:21:17.500185 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:21:17.500196 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-18 06:21:17.500205 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:21:17.500215 | orchestrator |
2026-02-18 06:21:17.500225 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-18 06:21:17.500234 | orchestrator | Wednesday 18 February 2026 06:20:25 +0000 (0:00:01.704) 0:29:14.743 ****
2026-02-18 06:21:17.500258 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-02-18 06:21:17.500268 | orchestrator |
2026-02-18 06:21:17.500277 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-18 06:21:17.500288 | orchestrator | Wednesday 18 February 2026 06:20:26 +0000 (0:00:01.105) 0:29:15.848 ****
2026-02-18 06:21:17.500299 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:21:17.500310 | orchestrator |
2026-02-18 06:21:17.500321 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-18 06:21:17.500332 | orchestrator | Wednesday 18 February 2026 06:20:28 +0000 (0:00:01.509) 0:29:17.358 ****
2026-02-18 06:21:17.500343 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:21:17.500354 | orchestrator |
2026-02-18 06:21:17.500364 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-18 06:21:17.500376 | orchestrator | Wednesday 18 February 2026 06:20:29 +0000 (0:00:01.111) 0:29:18.469 ****
2026-02-18 06:21:17.500387 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 06:21:17.500398 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 06:21:17.500409 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 06:21:17.500419 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-02-18 06:21:17.500428 | orchestrator |
2026-02-18 06:21:17.500438 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-18 06:21:17.500447 | orchestrator | Wednesday 18 February 2026 06:20:36 +0000 (0:00:06.893) 0:29:25.363 ****
2026-02-18 06:21:17.500457 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:21:17.500466 | orchestrator |
2026-02-18 06:21:17.500476 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-18 06:21:17.500485 | orchestrator | Wednesday 18 February 2026 06:20:37 +0000 (0:00:01.168) 0:29:26.531 ****
2026-02-18 06:21:17.500495 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-18 06:21:17.500511 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-18 06:21:17.500521 | orchestrator |
2026-02-18 06:21:17.500531 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-18 06:21:17.500540 | orchestrator | Wednesday 18 February 2026 06:20:40 +0000 (0:00:03.118) 0:29:29.650 ****
2026-02-18 06:21:17.500550 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-18 06:21:17.500559 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-18 06:21:17.500574 | orchestrator |
2026-02-18 06:21:17.500590 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-18 06:21:17.500607 | orchestrator | Wednesday 18 February 2026 06:20:42 +0000 (0:00:02.042) 0:29:31.693 ****
2026-02-18 06:21:17.500622 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:21:17.500637 | orchestrator |
2026-02-18 06:21:17.500652 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-18 06:21:17.500668 | orchestrator | Wednesday 18 February 2026 06:20:44 +0000 (0:00:01.553) 0:29:33.247 ****
2026-02-18 06:21:17.500684 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:21:17.500725 | orchestrator |
2026-02-18 06:21:17.500742 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-18 06:21:17.500758 | orchestrator | Wednesday 18 February 2026 06:20:45 +0000 (0:00:00.789) 0:29:34.036 ****
2026-02-18 06:21:17.500774 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:21:17.500790 | orchestrator |
2026-02-18 06:21:17.500805 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-18 06:21:17.500844 | orchestrator | Wednesday 18 February 2026 06:20:45 +0000 (0:00:00.823) 0:29:34.860 ****
2026-02-18 06:21:17.500863 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-02-18 06:21:17.500880 | orchestrator |
2026-02-18 06:21:17.500896 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-18 06:21:17.500912 | orchestrator | Wednesday 18 February 2026 06:20:47 +0000 (0:00:01.123) 0:29:35.983 ****
2026-02-18 06:21:17.500928 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:21:17.500944 | orchestrator |
2026-02-18 06:21:17.500960 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-18 06:21:17.500977 | orchestrator | Wednesday 18 February 2026 06:20:48 +0000 (0:00:01.152) 0:29:37.136 ****
2026-02-18 06:21:17.500994 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:21:17.501010 | orchestrator |
2026-02-18 06:21:17.501026 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-18 06:21:17.501043 | orchestrator | Wednesday 18 February 2026 06:20:49 +0000 (0:00:01.158) 0:29:38.295 ****
2026-02-18 06:21:17.501059 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-02-18 06:21:17.501074 | orchestrator |
2026-02-18 06:21:17.501091 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-18 06:21:17.501108 | orchestrator | Wednesday 18 February 2026 06:20:50 +0000 (0:00:01.236) 0:29:39.532 ****
2026-02-18 06:21:17.501123 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:21:17.501139 | orchestrator |
2026-02-18 06:21:17.501149 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-18 06:21:17.501159 | orchestrator | Wednesday 18 February 2026 06:20:52 +0000 (0:00:02.064) 0:29:41.596 ****
2026-02-18 06:21:17.501168 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:21:17.501179 | orchestrator |
2026-02-18 06:21:17.501196 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-18 06:21:17.501211 | orchestrator | Wednesday 18 February 2026 06:20:54 +0000 (0:00:01.978) 0:29:43.575 ****
2026-02-18 06:21:17.501227 | orchestrator | ok: [testbed-node-1]
2026-02-18 06:21:17.501243 | orchestrator |
2026-02-18 06:21:17.501259 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-18 06:21:17.501275 | orchestrator | Wednesday 18 February 2026 06:20:57 +0000 (0:00:02.556) 0:29:46.131 ****
2026-02-18 06:21:17.501291 | orchestrator | changed: [testbed-node-1]
2026-02-18 06:21:17.501319 | orchestrator |
2026-02-18 06:21:17.501337 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-18 06:21:17.501354 | orchestrator | Wednesday 18 February 2026 06:21:00 +0000 (0:00:03.582) 0:29:49.714 ****
2026-02-18 06:21:17.501379 | orchestrator | skipping: [testbed-node-1]
2026-02-18 06:21:17.501396 | orchestrator |
2026-02-18 06:21:17.501411 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-02-18 06:21:17.501427 | orchestrator |
2026-02-18 06:21:17.501443 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-18 06:21:17.501459 | orchestrator | Wednesday 18 February 2026 06:21:01 +0000 (0:00:01.031) 0:29:50.745 ****
2026-02-18 06:21:17.501475 | orchestrator | changed: [testbed-node-2]
2026-02-18 06:21:17.501491 | orchestrator |
2026-02-18 06:21:17.501507 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-02-18 06:21:17.501523 | orchestrator | Wednesday 18 February 2026 06:21:04 +0000 (0:00:02.464) 0:29:53.209 ****
2026-02-18 06:21:17.501539 | orchestrator | changed: [testbed-node-2]
2026-02-18 06:21:17.501556 | orchestrator |
2026-02-18 06:21:17.501572 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-18 06:21:17.501589 | orchestrator | Wednesday 18 February 2026 06:21:06 +0000 (0:00:02.049) 0:29:55.259 ****
2026-02-18 06:21:17.501604 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-02-18 06:21:17.501621 | orchestrator |
2026-02-18 06:21:17.501637 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-18 06:21:17.501653 | orchestrator | Wednesday 18 February 2026 06:21:07 +0000 (0:00:01.167) 0:29:56.427 ****
2026-02-18 06:21:17.501670 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:21:17.501686 | orchestrator |
2026-02-18 06:21:17.501724 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-18 06:21:17.501741 | orchestrator | Wednesday 18 February 2026 06:21:09 +0000 (0:00:01.511) 0:29:57.938 ****
2026-02-18 06:21:17.501758 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:21:17.501775 | orchestrator |
2026-02-18 06:21:17.501792 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-18 06:21:17.501808 | orchestrator | Wednesday 18 February 2026 06:21:10 +0000 (0:00:01.209) 0:29:59.148 ****
2026-02-18 06:21:17.501826 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:21:17.501841 | orchestrator |
2026-02-18 06:21:17.501857 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-18 06:21:17.501866 | orchestrator | Wednesday 18 February 2026 06:21:11 +0000 (0:00:01.420) 0:30:00.568 ****
2026-02-18 06:21:17.501876 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:21:17.501885 | orchestrator |
2026-02-18 06:21:17.501894 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-18 06:21:17.501904 | orchestrator | Wednesday 18 February 2026 06:21:12 +0000 (0:00:01.179) 0:30:01.748 ****
2026-02-18 06:21:17.501913 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:21:17.501922 | orchestrator |
2026-02-18 06:21:17.501938 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-18 06:21:17.501954 | orchestrator | Wednesday 18 February 2026 06:21:14 +0000 (0:00:01.170) 0:30:02.919 ****
2026-02-18 06:21:17.501970 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:21:17.501986 | orchestrator |
2026-02-18 06:21:17.502002 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-18 06:21:17.502089 | orchestrator | Wednesday 18 February 2026 06:21:15 +0000 (0:00:01.142) 0:30:04.061 ****
2026-02-18 06:21:17.502111 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:17.502129 | orchestrator |
2026-02-18 06:21:17.502144 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-18 06:21:17.502161 | orchestrator | Wednesday 18 February 2026 06:21:16 +0000 (0:00:01.164) 0:30:05.226 ****
2026-02-18 06:21:17.502178 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:21:17.502194 | orchestrator |
2026-02-18 06:21:17.502221 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-18 06:21:42.987089 | orchestrator | Wednesday 18 February 2026 06:21:17 +0000 (0:00:01.134) 0:30:06.361 ****
2026-02-18 06:21:42.987203 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:21:42.987221 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:21:42.987233 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-18 06:21:42.987245 | orchestrator |
2026-02-18 06:21:42.987257 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-18 06:21:42.987268 | orchestrator | Wednesday 18 February 2026 06:21:19 +0000 (0:00:02.042) 0:30:08.403 ****
2026-02-18 06:21:42.987279 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:21:42.987291 | orchestrator |
2026-02-18 06:21:42.987302 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-18 06:21:42.987313 | orchestrator | Wednesday 18 February 2026 06:21:20 +0000 (0:00:01.265) 0:30:09.669 ****
2026-02-18 06:21:42.987324 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:21:42.987334 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:21:42.987345 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-18 06:21:42.987356 | orchestrator |
2026-02-18 06:21:42.987367 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-18 06:21:42.987378 | orchestrator | Wednesday 18 February 2026 06:21:23 +0000 (0:00:03.197) 0:30:12.867 ****
2026-02-18 06:21:42.987390 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-18 06:21:42.987401 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-18 06:21:42.987411 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-18 06:21:42.987422 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:42.987433 | orchestrator |
2026-02-18 06:21:42.987444 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-18 06:21:42.987455 | orchestrator | Wednesday 18 February 2026 06:21:25 +0000 (0:00:01.825) 0:30:14.692 ****
2026-02-18 06:21:42.987484 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:21:42.987499 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:21:42.987510 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:21:42.987521 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:42.987532 | orchestrator |
2026-02-18 06:21:42.987543 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-18 06:21:42.987554 | orchestrator | Wednesday 18 February 2026 06:21:27 +0000 (0:00:02.022) 0:30:16.715 ****
2026-02-18 06:21:42.987567 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:21:42.987582 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:21:42.987617 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:21:42.987630 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:42.987643 | orchestrator |
2026-02-18 06:21:42.987656 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-18 06:21:42.987668 | orchestrator | Wednesday 18 February 2026 06:21:29 +0000 (0:00:01.217) 0:30:17.933 ****
2026-02-18 06:21:42.987702 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:21:21.297949', 'end': '2026-02-18 06:21:21.345080', 'delta': '0:00:00.047131', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:21:42.987720 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:21:22.195867', 'end': '2026-02-18 06:21:22.249691', 'delta': '0:00:00.053824', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:21:42.987765 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:21:22.760621', 'end': '2026-02-18 06:21:22.808651', 'delta': '0:00:00.048030', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:21:42.987777 | orchestrator |
2026-02-18 06:21:42.987790 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-18 06:21:42.987802 | orchestrator | Wednesday 18 February 2026 06:21:30 +0000 (0:00:01.192) 0:30:19.125 ****
2026-02-18 06:21:42.987815 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:21:42.987827 | orchestrator |
2026-02-18 06:21:42.987840 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-18 06:21:42.987852 | orchestrator | Wednesday 18 February 2026 06:21:31 +0000 (0:00:01.245) 0:30:20.370 ****
2026-02-18 06:21:42.987865 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:42.987877 | orchestrator |
2026-02-18 06:21:42.987889 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-18 06:21:42.987902 | orchestrator | Wednesday 18 February 2026 06:21:32 +0000 (0:00:01.241) 0:30:21.612 ****
2026-02-18 06:21:42.987922 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:21:42.987935 | orchestrator |
2026-02-18 06:21:42.987947 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-18 06:21:42.987960 | orchestrator | Wednesday 18 February 2026 06:21:33 +0000 (0:00:01.165) 0:30:22.778 ****
2026-02-18 06:21:42.987973 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-18 06:21:42.987985 | orchestrator |
2026-02-18 06:21:42.987996 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:21:42.988007 | orchestrator | Wednesday 18 February 2026 06:21:35 +0000 (0:00:02.039) 0:30:24.817 ****
2026-02-18 06:21:42.988017 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:21:42.988028 | orchestrator |
2026-02-18 06:21:42.988039 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-18 06:21:42.988049 | orchestrator | Wednesday 18 February 2026 06:21:37 +0000 (0:00:01.144) 0:30:25.962 ****
2026-02-18 06:21:42.988060 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:42.988071 | orchestrator |
2026-02-18 06:21:42.988082 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-18 06:21:42.988093 | orchestrator | Wednesday 18 February 2026 06:21:38 +0000 (0:00:01.139) 0:30:27.101 ****
2026-02-18 06:21:42.988103 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:42.988114 | orchestrator |
2026-02-18 06:21:42.988125 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:21:42.988135 | orchestrator | Wednesday 18 February 2026 06:21:39 +0000 (0:00:01.250) 0:30:28.352 ****
2026-02-18 06:21:42.988146 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:42.988157 | orchestrator |
2026-02-18 06:21:42.988168 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-18 06:21:42.988179 | orchestrator | Wednesday 18 February 2026 06:21:40 +0000 (0:00:01.130) 0:30:29.483 ****
2026-02-18 06:21:42.988189 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:42.988200 | orchestrator |
2026-02-18 06:21:42.988211 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-18 06:21:42.988222 | orchestrator | Wednesday 18 February 2026 06:21:41 +0000 (0:00:01.122) 0:30:30.605 ****
2026-02-18 06:21:42.988233 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:42.988243 | orchestrator |
2026-02-18 06:21:42.988262 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-18 06:21:50.278688 | orchestrator | Wednesday 18 February 2026 06:21:42 +0000 (0:00:01.239) 0:30:31.845 ****
2026-02-18 06:21:50.278954 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:50.278984 | orchestrator |
2026-02-18 06:21:50.278997 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-18 06:21:50.279009 | orchestrator | Wednesday 18 February 2026 06:21:44 +0000 (0:00:01.177) 0:30:33.022 ****
2026-02-18 06:21:50.279021 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:50.279032 | orchestrator |
2026-02-18 06:21:50.279043 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-18 06:21:50.279054 | orchestrator | Wednesday 18 February 2026 06:21:45 +0000 (0:00:01.186) 0:30:34.209 ****
2026-02-18 06:21:50.279066 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:50.279077 | orchestrator |
2026-02-18 06:21:50.279088 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-18 06:21:50.279100 | orchestrator | Wednesday 18 February 2026 06:21:46 +0000 (0:00:01.182) 0:30:35.391 ****
2026-02-18 06:21:50.279111 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:21:50.279123 | orchestrator |
2026-02-18 06:21:50.279134 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-18 06:21:50.279144 | orchestrator | Wednesday 18 February 2026 06:21:47 +0000 (0:00:01.138) 0:30:36.530 ****
2026-02-18 06:21:50.279158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:21:50.279216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:21:50.279231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:21:50.279245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-18 06:21:50.279261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:21:50.279274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:21:50.279287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:21:50.279333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd638dc9f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors':
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 06:21:50.279359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:21:50.279372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:21:50.279384 | orchestrator | 
skipping: [testbed-node-2] 2026-02-18 06:21:50.279395 | orchestrator | 2026-02-18 06:21:50.279407 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 06:21:50.279417 | orchestrator | Wednesday 18 February 2026 06:21:49 +0000 (0:00:01.390) 0:30:37.921 **** 2026-02-18 06:21:50.279428 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:21:50.279447 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:21:58.015981 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:21:58.016207 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:21:58.016260 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:21:58.016284 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:21:58.016302 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:21:58.016366 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd638dc9f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d638dc9f-a3be-40fa-a76f-064f22b3f5a8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:21:58.016403 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:21:58.016420 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:21:58.016438 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:21:58.016458 | orchestrator | 2026-02-18 06:21:58.016478 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:21:58.016497 | orchestrator | Wednesday 18 February 2026 06:21:50 +0000 (0:00:01.223) 0:30:39.144 **** 2026-02-18 06:21:58.016513 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:21:58.016532 | orchestrator | 2026-02-18 06:21:58.016549 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:21:58.016565 | orchestrator 
| Wednesday 18 February 2026 06:21:51 +0000 (0:00:01.513) 0:30:40.657 **** 2026-02-18 06:21:58.016582 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:21:58.016600 | orchestrator | 2026-02-18 06:21:58.016616 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:21:58.016633 | orchestrator | Wednesday 18 February 2026 06:21:52 +0000 (0:00:01.169) 0:30:41.827 **** 2026-02-18 06:21:58.016651 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:21:58.016667 | orchestrator | 2026-02-18 06:21:58.016684 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:21:58.016701 | orchestrator | Wednesday 18 February 2026 06:21:54 +0000 (0:00:01.533) 0:30:43.361 **** 2026-02-18 06:21:58.016719 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:21:58.016736 | orchestrator | 2026-02-18 06:21:58.016787 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:21:58.016804 | orchestrator | Wednesday 18 February 2026 06:21:55 +0000 (0:00:01.159) 0:30:44.521 **** 2026-02-18 06:21:58.016820 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:21:58.016847 | orchestrator | 2026-02-18 06:21:58.016865 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:21:58.016881 | orchestrator | Wednesday 18 February 2026 06:21:56 +0000 (0:00:01.233) 0:30:45.754 **** 2026-02-18 06:21:58.016897 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:21:58.016913 | orchestrator | 2026-02-18 06:21:58.016930 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:21:58.016960 | orchestrator | Wednesday 18 February 2026 06:21:58 +0000 (0:00:01.129) 0:30:46.883 **** 2026-02-18 06:22:35.517474 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-18 06:22:35.517611 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-1) 2026-02-18 06:22:35.517636 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-18 06:22:35.517653 | orchestrator | 2026-02-18 06:22:35.517673 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:22:35.517692 | orchestrator | Wednesday 18 February 2026 06:22:00 +0000 (0:00:02.017) 0:30:48.900 **** 2026-02-18 06:22:35.517711 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-18 06:22:35.517730 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-18 06:22:35.517747 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-18 06:22:35.517766 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.517783 | orchestrator | 2026-02-18 06:22:35.517867 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:22:35.517887 | orchestrator | Wednesday 18 February 2026 06:22:01 +0000 (0:00:01.176) 0:30:50.077 **** 2026-02-18 06:22:35.517906 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.517925 | orchestrator | 2026-02-18 06:22:35.517942 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 06:22:35.517960 | orchestrator | Wednesday 18 February 2026 06:22:02 +0000 (0:00:01.186) 0:30:51.263 **** 2026-02-18 06:22:35.517978 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:22:35.517997 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:22:35.518078 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-18 06:22:35.518094 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:22:35.518108 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-18 06:22:35.518136 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:22:35.518149 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:22:35.518162 | orchestrator | 2026-02-18 06:22:35.518175 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 06:22:35.518188 | orchestrator | Wednesday 18 February 2026 06:22:04 +0000 (0:00:01.805) 0:30:53.068 **** 2026-02-18 06:22:35.518200 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:22:35.518212 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:22:35.518225 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-18 06:22:35.518238 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:22:35.518249 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:22:35.518260 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:22:35.518271 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:22:35.518282 | orchestrator | 2026-02-18 06:22:35.518293 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 06:22:35.518303 | orchestrator | Wednesday 18 February 2026 06:22:06 +0000 (0:00:02.307) 0:30:55.376 **** 2026-02-18 06:22:35.518341 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-18 06:22:35.518354 | orchestrator | 2026-02-18 06:22:35.518365 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 06:22:35.518377 
| orchestrator | Wednesday 18 February 2026 06:22:07 +0000 (0:00:01.131) 0:30:56.507 **** 2026-02-18 06:22:35.518387 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-18 06:22:35.518398 | orchestrator | 2026-02-18 06:22:35.518409 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 06:22:35.518420 | orchestrator | Wednesday 18 February 2026 06:22:08 +0000 (0:00:01.164) 0:30:57.672 **** 2026-02-18 06:22:35.518431 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:22:35.518442 | orchestrator | 2026-02-18 06:22:35.518453 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 06:22:35.518463 | orchestrator | Wednesday 18 February 2026 06:22:10 +0000 (0:00:01.608) 0:30:59.280 **** 2026-02-18 06:22:35.518474 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.518485 | orchestrator | 2026-02-18 06:22:35.518496 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-18 06:22:35.518507 | orchestrator | Wednesday 18 February 2026 06:22:11 +0000 (0:00:01.138) 0:31:00.418 **** 2026-02-18 06:22:35.518518 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.518528 | orchestrator | 2026-02-18 06:22:35.518539 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-18 06:22:35.518550 | orchestrator | Wednesday 18 February 2026 06:22:12 +0000 (0:00:01.198) 0:31:01.617 **** 2026-02-18 06:22:35.518561 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.518571 | orchestrator | 2026-02-18 06:22:35.518582 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-18 06:22:35.518593 | orchestrator | Wednesday 18 February 2026 06:22:13 +0000 (0:00:01.153) 0:31:02.771 **** 2026-02-18 06:22:35.518604 | orchestrator | ok: [testbed-node-2] 
2026-02-18 06:22:35.518614 | orchestrator | 2026-02-18 06:22:35.518625 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-18 06:22:35.518636 | orchestrator | Wednesday 18 February 2026 06:22:15 +0000 (0:00:01.622) 0:31:04.393 **** 2026-02-18 06:22:35.518647 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.518658 | orchestrator | 2026-02-18 06:22:35.518669 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-18 06:22:35.518700 | orchestrator | Wednesday 18 February 2026 06:22:16 +0000 (0:00:01.129) 0:31:05.523 **** 2026-02-18 06:22:35.518712 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.518723 | orchestrator | 2026-02-18 06:22:35.518734 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-18 06:22:35.518746 | orchestrator | Wednesday 18 February 2026 06:22:17 +0000 (0:00:01.178) 0:31:06.701 **** 2026-02-18 06:22:35.518763 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:22:35.518781 | orchestrator | 2026-02-18 06:22:35.518826 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-18 06:22:35.518846 | orchestrator | Wednesday 18 February 2026 06:22:19 +0000 (0:00:01.587) 0:31:08.289 **** 2026-02-18 06:22:35.518860 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:22:35.518871 | orchestrator | 2026-02-18 06:22:35.518882 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-18 06:22:35.518893 | orchestrator | Wednesday 18 February 2026 06:22:21 +0000 (0:00:01.612) 0:31:09.902 **** 2026-02-18 06:22:35.518904 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.518915 | orchestrator | 2026-02-18 06:22:35.518925 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 06:22:35.518936 | orchestrator | Wednesday 18 
February 2026 06:22:21 +0000 (0:00:00.758) 0:31:10.660 **** 2026-02-18 06:22:35.518947 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:22:35.518958 | orchestrator | 2026-02-18 06:22:35.518969 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 06:22:35.518990 | orchestrator | Wednesday 18 February 2026 06:22:22 +0000 (0:00:00.804) 0:31:11.465 **** 2026-02-18 06:22:35.519001 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.519012 | orchestrator | 2026-02-18 06:22:35.519022 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 06:22:35.519033 | orchestrator | Wednesday 18 February 2026 06:22:23 +0000 (0:00:00.833) 0:31:12.298 **** 2026-02-18 06:22:35.519044 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.519055 | orchestrator | 2026-02-18 06:22:35.519066 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 06:22:35.519084 | orchestrator | Wednesday 18 February 2026 06:22:24 +0000 (0:00:00.759) 0:31:13.057 **** 2026-02-18 06:22:35.519095 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.519106 | orchestrator | 2026-02-18 06:22:35.519117 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 06:22:35.519127 | orchestrator | Wednesday 18 February 2026 06:22:24 +0000 (0:00:00.805) 0:31:13.863 **** 2026-02-18 06:22:35.519138 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.519149 | orchestrator | 2026-02-18 06:22:35.519160 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 06:22:35.519170 | orchestrator | Wednesday 18 February 2026 06:22:25 +0000 (0:00:00.817) 0:31:14.680 **** 2026-02-18 06:22:35.519181 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.519192 | orchestrator | 2026-02-18 06:22:35.519203 | orchestrator | 
TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 06:22:35.519214 | orchestrator | Wednesday 18 February 2026 06:22:26 +0000 (0:00:00.765) 0:31:15.446 **** 2026-02-18 06:22:35.519224 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:22:35.519235 | orchestrator | 2026-02-18 06:22:35.519246 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 06:22:35.519256 | orchestrator | Wednesday 18 February 2026 06:22:27 +0000 (0:00:00.793) 0:31:16.240 **** 2026-02-18 06:22:35.519267 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:22:35.519278 | orchestrator | 2026-02-18 06:22:35.519289 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 06:22:35.519299 | orchestrator | Wednesday 18 February 2026 06:22:28 +0000 (0:00:00.812) 0:31:17.052 **** 2026-02-18 06:22:35.519310 | orchestrator | ok: [testbed-node-2] 2026-02-18 06:22:35.519321 | orchestrator | 2026-02-18 06:22:35.519332 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-18 06:22:35.519342 | orchestrator | Wednesday 18 February 2026 06:22:29 +0000 (0:00:00.936) 0:31:17.989 **** 2026-02-18 06:22:35.519353 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.519364 | orchestrator | 2026-02-18 06:22:35.519377 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-18 06:22:35.519396 | orchestrator | Wednesday 18 February 2026 06:22:29 +0000 (0:00:00.772) 0:31:18.761 **** 2026-02-18 06:22:35.519423 | orchestrator | skipping: [testbed-node-2] 2026-02-18 06:22:35.519444 | orchestrator | 2026-02-18 06:22:35.519463 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-18 06:22:35.519480 | orchestrator | Wednesday 18 February 2026 06:22:30 +0000 (0:00:00.812) 0:31:19.574 **** 2026-02-18 06:22:35.519498 | 
orchestrator | skipping: [testbed-node-2]
2026-02-18 06:22:35.519514 | orchestrator |
2026-02-18 06:22:35.519529 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-18 06:22:35.519546 | orchestrator | Wednesday 18 February 2026 06:22:31 +0000 (0:00:00.771) 0:31:20.346 ****
2026-02-18 06:22:35.519563 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:22:35.519579 | orchestrator |
2026-02-18 06:22:35.519595 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-18 06:22:35.519611 | orchestrator | Wednesday 18 February 2026 06:22:32 +0000 (0:00:00.780) 0:31:21.127 ****
2026-02-18 06:22:35.519627 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:22:35.519645 | orchestrator |
2026-02-18 06:22:35.519663 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-18 06:22:35.519692 | orchestrator | Wednesday 18 February 2026 06:22:33 +0000 (0:00:00.790) 0:31:21.917 ****
2026-02-18 06:22:35.519709 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:22:35.519726 | orchestrator |
2026-02-18 06:22:35.519745 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-18 06:22:35.519763 | orchestrator | Wednesday 18 February 2026 06:22:33 +0000 (0:00:00.846) 0:31:22.764 ****
2026-02-18 06:22:35.519780 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:22:35.519846 | orchestrator |
2026-02-18 06:22:35.519867 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-18 06:22:35.519885 | orchestrator | Wednesday 18 February 2026 06:22:34 +0000 (0:00:00.803) 0:31:23.567 ****
2026-02-18 06:22:35.519918 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.761818 | orchestrator |
2026-02-18 06:23:23.761969 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-18 06:23:23.761983 | orchestrator | Wednesday 18 February 2026 06:22:35 +0000 (0:00:00.816) 0:31:24.384 ****
2026-02-18 06:23:23.761991 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762000 | orchestrator |
2026-02-18 06:23:23.762008 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-18 06:23:23.762076 | orchestrator | Wednesday 18 February 2026 06:22:36 +0000 (0:00:00.795) 0:31:25.180 ****
2026-02-18 06:23:23.762086 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762094 | orchestrator |
2026-02-18 06:23:23.762101 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-18 06:23:23.762109 | orchestrator | Wednesday 18 February 2026 06:22:37 +0000 (0:00:00.771) 0:31:25.952 ****
2026-02-18 06:23:23.762117 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762124 | orchestrator |
2026-02-18 06:23:23.762132 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-18 06:23:23.762139 | orchestrator | Wednesday 18 February 2026 06:22:37 +0000 (0:00:00.780) 0:31:26.732 ****
2026-02-18 06:23:23.762147 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762154 | orchestrator |
2026-02-18 06:23:23.762162 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-18 06:23:23.762169 | orchestrator | Wednesday 18 February 2026 06:22:38 +0000 (0:00:00.882) 0:31:27.615 ****
2026-02-18 06:23:23.762177 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:23:23.762185 | orchestrator |
2026-02-18 06:23:23.762193 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-18 06:23:23.762200 | orchestrator | Wednesday 18 February 2026 06:22:40 +0000 (0:00:01.625) 0:31:29.240 ****
2026-02-18 06:23:23.762207 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:23:23.762214 | orchestrator |
2026-02-18 06:23:23.762226 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-18 06:23:23.762238 | orchestrator | Wednesday 18 February 2026 06:22:42 +0000 (0:00:02.032) 0:31:31.273 ****
2026-02-18 06:23:23.762266 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-18 06:23:23.762281 | orchestrator |
2026-02-18 06:23:23.762293 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-18 06:23:23.762306 | orchestrator | Wednesday 18 February 2026 06:22:43 +0000 (0:00:01.133) 0:31:32.406 ****
2026-02-18 06:23:23.762317 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762325 | orchestrator |
2026-02-18 06:23:23.762332 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-18 06:23:23.762340 | orchestrator | Wednesday 18 February 2026 06:22:44 +0000 (0:00:01.195) 0:31:33.601 ****
2026-02-18 06:23:23.762347 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762354 | orchestrator |
2026-02-18 06:23:23.762361 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-18 06:23:23.762369 | orchestrator | Wednesday 18 February 2026 06:22:45 +0000 (0:00:01.159) 0:31:34.760 ****
2026-02-18 06:23:23.762377 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-18 06:23:23.762404 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-18 06:23:23.762413 | orchestrator |
2026-02-18 06:23:23.762421 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-18 06:23:23.762430 | orchestrator | Wednesday 18 February 2026 06:22:47 +0000 (0:00:01.916) 0:31:36.677 ****
2026-02-18 06:23:23.762438 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:23:23.762446 | orchestrator |
2026-02-18 06:23:23.762454 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-18 06:23:23.762462 | orchestrator | Wednesday 18 February 2026 06:22:49 +0000 (0:00:01.494) 0:31:38.172 ****
2026-02-18 06:23:23.762471 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762479 | orchestrator |
2026-02-18 06:23:23.762488 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-18 06:23:23.762496 | orchestrator | Wednesday 18 February 2026 06:22:50 +0000 (0:00:01.180) 0:31:39.352 ****
2026-02-18 06:23:23.762504 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762511 | orchestrator |
2026-02-18 06:23:23.762518 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-18 06:23:23.762525 | orchestrator | Wednesday 18 February 2026 06:22:51 +0000 (0:00:00.782) 0:31:40.135 ****
2026-02-18 06:23:23.762532 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762540 | orchestrator |
2026-02-18 06:23:23.762547 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-18 06:23:23.762554 | orchestrator | Wednesday 18 February 2026 06:22:52 +0000 (0:00:00.853) 0:31:40.988 ****
2026-02-18 06:23:23.762561 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-18 06:23:23.762568 | orchestrator |
2026-02-18 06:23:23.762575 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-18 06:23:23.762583 | orchestrator | Wednesday 18 February 2026 06:22:53 +0000 (0:00:01.250) 0:31:42.239 ****
2026-02-18 06:23:23.762590 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:23:23.762597 | orchestrator |
2026-02-18 06:23:23.762604 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-18 06:23:23.762611 | orchestrator | Wednesday 18 February 2026 06:22:55 +0000 (0:00:01.745) 0:31:43.984 ****
2026-02-18 06:23:23.762618 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-18 06:23:23.762626 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-18 06:23:23.762633 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-18 06:23:23.762640 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762647 | orchestrator |
2026-02-18 06:23:23.762654 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-18 06:23:23.762661 | orchestrator | Wednesday 18 February 2026 06:22:56 +0000 (0:00:01.159) 0:31:45.144 ****
2026-02-18 06:23:23.762684 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762691 | orchestrator |
2026-02-18 06:23:23.762698 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-18 06:23:23.762706 | orchestrator | Wednesday 18 February 2026 06:22:57 +0000 (0:00:01.120) 0:31:46.265 ****
2026-02-18 06:23:23.762716 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762729 | orchestrator |
2026-02-18 06:23:23.762740 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-18 06:23:23.762752 | orchestrator | Wednesday 18 February 2026 06:22:58 +0000 (0:00:01.193) 0:31:47.458 ****
2026-02-18 06:23:23.762763 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762774 | orchestrator |
2026-02-18 06:23:23.762785 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-18 06:23:23.762796 | orchestrator | Wednesday 18 February 2026 06:22:59 +0000 (0:00:01.213) 0:31:48.671 ****
2026-02-18 06:23:23.762807 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762819 | orchestrator |
2026-02-18 06:23:23.762840 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-18 06:23:23.762888 | orchestrator | Wednesday 18 February 2026 06:23:00 +0000 (0:00:01.201) 0:31:49.873 ****
2026-02-18 06:23:23.762906 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.762913 | orchestrator |
2026-02-18 06:23:23.762921 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-18 06:23:23.762928 | orchestrator | Wednesday 18 February 2026 06:23:01 +0000 (0:00:00.788) 0:31:50.662 ****
2026-02-18 06:23:23.762935 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:23:23.762943 | orchestrator |
2026-02-18 06:23:23.762950 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-18 06:23:23.762957 | orchestrator | Wednesday 18 February 2026 06:23:03 +0000 (0:00:02.157) 0:31:52.820 ****
2026-02-18 06:23:23.762964 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:23:23.762971 | orchestrator |
2026-02-18 06:23:23.762978 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-18 06:23:23.762992 | orchestrator | Wednesday 18 February 2026 06:23:04 +0000 (0:00:00.799) 0:31:53.619 ****
2026-02-18 06:23:23.762999 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-02-18 06:23:23.763006 | orchestrator |
2026-02-18 06:23:23.763013 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-18 06:23:23.763020 | orchestrator | Wednesday 18 February 2026 06:23:05 +0000 (0:00:01.186) 0:31:54.805 ****
2026-02-18 06:23:23.763028 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.763035 | orchestrator |
2026-02-18 06:23:23.763042 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-18 06:23:23.763049 | orchestrator | Wednesday 18 February 2026 06:23:07 +0000 (0:00:01.175) 0:31:55.981 ****
2026-02-18 06:23:23.763057 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.763064 | orchestrator |
2026-02-18 06:23:23.763071 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-18 06:23:23.763078 | orchestrator | Wednesday 18 February 2026 06:23:08 +0000 (0:00:01.205) 0:31:57.187 ****
2026-02-18 06:23:23.763085 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.763093 | orchestrator |
2026-02-18 06:23:23.763100 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-18 06:23:23.763107 | orchestrator | Wednesday 18 February 2026 06:23:09 +0000 (0:00:01.141) 0:31:58.328 ****
2026-02-18 06:23:23.763114 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.763121 | orchestrator |
2026-02-18 06:23:23.763128 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-18 06:23:23.763136 | orchestrator | Wednesday 18 February 2026 06:23:10 +0000 (0:00:01.226) 0:31:59.555 ****
2026-02-18 06:23:23.763143 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.763150 | orchestrator |
2026-02-18 06:23:23.763157 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-18 06:23:23.763164 | orchestrator | Wednesday 18 February 2026 06:23:11 +0000 (0:00:01.260) 0:32:00.816 ****
2026-02-18 06:23:23.763171 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.763179 | orchestrator |
2026-02-18 06:23:23.763186 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-18 06:23:23.763193 | orchestrator | Wednesday 18 February 2026 06:23:13 +0000 (0:00:01.184) 0:32:02.000 ****
2026-02-18 06:23:23.763200 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.763207 | orchestrator |
2026-02-18 06:23:23.763214 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-18 06:23:23.763221 | orchestrator | Wednesday 18 February 2026 06:23:14 +0000 (0:00:01.224) 0:32:03.225 ****
2026-02-18 06:23:23.763228 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:23.763235 | orchestrator |
2026-02-18 06:23:23.763243 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-18 06:23:23.763250 | orchestrator | Wednesday 18 February 2026 06:23:15 +0000 (0:00:01.189) 0:32:04.414 ****
2026-02-18 06:23:23.763257 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:23:23.763280 | orchestrator |
2026-02-18 06:23:23.763287 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-18 06:23:23.763295 | orchestrator | Wednesday 18 February 2026 06:23:16 +0000 (0:00:00.836) 0:32:05.251 ****
2026-02-18 06:23:23.763302 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-02-18 06:23:23.763309 | orchestrator |
2026-02-18 06:23:23.763316 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-18 06:23:23.763323 | orchestrator | Wednesday 18 February 2026 06:23:17 +0000 (0:00:01.121) 0:32:06.373 ****
2026-02-18 06:23:23.763331 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-02-18 06:23:23.763338 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-18 06:23:23.763345 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-18 06:23:23.763352 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-18 06:23:23.763360 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-18 06:23:23.763367 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-18 06:23:23.763380 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-18 06:23:59.651020 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-18 06:23:59.651148 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-18 06:23:59.651166 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-18 06:23:59.651178 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-18 06:23:59.651190 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-18 06:23:59.651202 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-18 06:23:59.651214 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-18 06:23:59.651226 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-02-18 06:23:59.651239 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-02-18 06:23:59.651251 | orchestrator |
2026-02-18 06:23:59.651264 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-18 06:23:59.651276 | orchestrator | Wednesday 18 February 2026 06:23:23 +0000 (0:00:06.247) 0:32:12.620 ****
2026-02-18 06:23:59.651288 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651300 | orchestrator |
2026-02-18 06:23:59.651312 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-18 06:23:59.651324 | orchestrator | Wednesday 18 February 2026 06:23:24 +0000 (0:00:00.792) 0:32:13.413 ****
2026-02-18 06:23:59.651336 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651348 | orchestrator |
2026-02-18 06:23:59.651360 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-18 06:23:59.651372 | orchestrator | Wednesday 18 February 2026 06:23:25 +0000 (0:00:00.793) 0:32:14.207 ****
2026-02-18 06:23:59.651385 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651397 | orchestrator |
2026-02-18 06:23:59.651408 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-18 06:23:59.651438 | orchestrator | Wednesday 18 February 2026 06:23:26 +0000 (0:00:00.855) 0:32:15.062 ****
2026-02-18 06:23:59.651450 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651462 | orchestrator |
2026-02-18 06:23:59.651473 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-18 06:23:59.651485 | orchestrator | Wednesday 18 February 2026 06:23:26 +0000 (0:00:00.778) 0:32:15.840 ****
2026-02-18 06:23:59.651497 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651509 | orchestrator |
2026-02-18 06:23:59.651522 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-18 06:23:59.651535 | orchestrator | Wednesday 18 February 2026 06:23:27 +0000 (0:00:00.832) 0:32:16.673 ****
2026-02-18 06:23:59.651548 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651560 | orchestrator |
2026-02-18 06:23:59.651573 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-18 06:23:59.651616 | orchestrator | Wednesday 18 February 2026 06:23:28 +0000 (0:00:00.778) 0:32:17.451 ****
2026-02-18 06:23:59.651629 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651641 | orchestrator |
2026-02-18 06:23:59.651654 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-18 06:23:59.651667 | orchestrator | Wednesday 18 February 2026 06:23:29 +0000 (0:00:00.833) 0:32:18.284 ****
2026-02-18 06:23:59.651680 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651692 | orchestrator |
2026-02-18 06:23:59.651704 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-18 06:23:59.651718 | orchestrator | Wednesday 18 February 2026 06:23:30 +0000 (0:00:00.769) 0:32:19.053 ****
2026-02-18 06:23:59.651730 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651742 | orchestrator |
2026-02-18 06:23:59.651755 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-18 06:23:59.651767 | orchestrator | Wednesday 18 February 2026 06:23:30 +0000 (0:00:00.818) 0:32:19.872 ****
2026-02-18 06:23:59.651780 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651792 | orchestrator |
2026-02-18 06:23:59.651805 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-18 06:23:59.651817 | orchestrator | Wednesday 18 February 2026 06:23:31 +0000 (0:00:00.749) 0:32:20.622 ****
2026-02-18 06:23:59.651830 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651843 | orchestrator |
2026-02-18 06:23:59.651856 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-18 06:23:59.651869 | orchestrator | Wednesday 18 February 2026 06:23:32 +0000 (0:00:00.791) 0:32:21.413 ****
2026-02-18 06:23:59.651880 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651918 | orchestrator |
2026-02-18 06:23:59.651932 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-18 06:23:59.651945 | orchestrator | Wednesday 18 February 2026 06:23:33 +0000 (0:00:00.782) 0:32:22.196 ****
2026-02-18 06:23:59.651958 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.651970 | orchestrator |
2026-02-18 06:23:59.651981 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-18 06:23:59.651993 | orchestrator | Wednesday 18 February 2026 06:23:34 +0000 (0:00:00.867) 0:32:23.063 ****
2026-02-18 06:23:59.652004 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652015 | orchestrator |
2026-02-18 06:23:59.652026 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-18 06:23:59.652038 | orchestrator | Wednesday 18 February 2026 06:23:34 +0000 (0:00:00.812) 0:32:23.876 ****
2026-02-18 06:23:59.652050 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652061 | orchestrator |
2026-02-18 06:23:59.652072 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-18 06:23:59.652083 | orchestrator | Wednesday 18 February 2026 06:23:35 +0000 (0:00:00.896) 0:32:24.772 ****
2026-02-18 06:23:59.652095 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652107 | orchestrator |
2026-02-18 06:23:59.652118 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-18 06:23:59.652130 | orchestrator | Wednesday 18 February 2026 06:23:36 +0000 (0:00:00.827) 0:32:25.599 ****
2026-02-18 06:23:59.652165 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652178 | orchestrator |
2026-02-18 06:23:59.652188 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-18 06:23:59.652199 | orchestrator | Wednesday 18 February 2026 06:23:37 +0000 (0:00:00.821) 0:32:26.421 ****
2026-02-18 06:23:59.652209 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652219 | orchestrator |
2026-02-18 06:23:59.652228 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-18 06:23:59.652238 | orchestrator | Wednesday 18 February 2026 06:23:38 +0000 (0:00:00.791) 0:32:27.213 ****
2026-02-18 06:23:59.652258 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652268 | orchestrator |
2026-02-18 06:23:59.652277 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-18 06:23:59.652287 | orchestrator | Wednesday 18 February 2026 06:23:39 +0000 (0:00:00.837) 0:32:28.051 ****
2026-02-18 06:23:59.652296 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652306 | orchestrator |
2026-02-18 06:23:59.652316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-18 06:23:59.652326 | orchestrator | Wednesday 18 February 2026 06:23:39 +0000 (0:00:00.808) 0:32:28.859 ****
2026-02-18 06:23:59.652335 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652345 | orchestrator |
2026-02-18 06:23:59.652355 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-18 06:23:59.652364 | orchestrator | Wednesday 18 February 2026 06:23:40 +0000 (0:00:00.770) 0:32:29.630 ****
2026-02-18 06:23:59.652374 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-18 06:23:59.652384 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-18 06:23:59.652393 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-18 06:23:59.652403 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652412 | orchestrator |
2026-02-18 06:23:59.652422 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-18 06:23:59.652439 | orchestrator | Wednesday 18 February 2026 06:23:41 +0000 (0:00:01.047) 0:32:30.678 ****
2026-02-18 06:23:59.652449 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-18 06:23:59.652459 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-18 06:23:59.652469 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-18 06:23:59.652479 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652489 | orchestrator |
2026-02-18 06:23:59.652498 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-18 06:23:59.652508 | orchestrator | Wednesday 18 February 2026 06:23:42 +0000 (0:00:01.132) 0:32:31.810 ****
2026-02-18 06:23:59.652518 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-18 06:23:59.652528 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-18 06:23:59.652537 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-18 06:23:59.652547 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652556 | orchestrator |
2026-02-18 06:23:59.652566 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-18 06:23:59.652576 | orchestrator | Wednesday 18 February 2026 06:23:43 +0000 (0:00:01.059) 0:32:32.869 ****
2026-02-18 06:23:59.652586 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652595 | orchestrator |
2026-02-18 06:23:59.652605 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-18 06:23:59.652615 | orchestrator | Wednesday 18 February 2026 06:23:44 +0000 (0:00:00.808) 0:32:33.678 ****
2026-02-18 06:23:59.652625 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-18 06:23:59.652635 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652645 | orchestrator |
2026-02-18 06:23:59.652654 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-18 06:23:59.652664 | orchestrator | Wednesday 18 February 2026 06:23:45 +0000 (0:00:00.919) 0:32:34.597 ****
2026-02-18 06:23:59.652674 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:23:59.652684 | orchestrator |
2026-02-18 06:23:59.652694 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-18 06:23:59.652703 | orchestrator | Wednesday 18 February 2026 06:23:47 +0000 (0:00:01.571) 0:32:36.169 ****
2026-02-18 06:23:59.652713 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:23:59.652724 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:23:59.652734 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-18 06:23:59.652751 | orchestrator |
2026-02-18 06:23:59.652761 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-18 06:23:59.652771 | orchestrator | Wednesday 18 February 2026 06:23:48 +0000 (0:00:01.353) 0:32:37.523 ****
2026-02-18 06:23:59.652781 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-02-18 06:23:59.652790 | orchestrator |
2026-02-18 06:23:59.652800 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-18 06:23:59.652811 | orchestrator | Wednesday 18 February 2026 06:23:49 +0000 (0:00:01.184) 0:32:38.708 ****
2026-02-18 06:23:59.652821 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:23:59.652832 | orchestrator |
2026-02-18 06:23:59.652842 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-18 06:23:59.652852 | orchestrator | Wednesday 18 February 2026 06:23:51 +0000 (0:00:01.528) 0:32:40.236 ****
2026-02-18 06:23:59.652861 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:23:59.652871 | orchestrator |
2026-02-18 06:23:59.652881 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-18 06:23:59.652911 | orchestrator | Wednesday 18 February 2026 06:23:52 +0000 (0:00:01.162) 0:32:41.399 ****
2026-02-18 06:23:59.652922 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 06:23:59.652932 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 06:23:59.652949 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 06:24:46.628291 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-02-18 06:24:46.628404 | orchestrator |
2026-02-18 06:24:46.628421 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-18 06:24:46.628433 | orchestrator | Wednesday 18 February 2026 06:23:59 +0000 (0:00:07.109) 0:32:48.508 ****
2026-02-18 06:24:46.628448 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:24:46.628468 | orchestrator |
2026-02-18 06:24:46.628487 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-18 06:24:46.628504 | orchestrator | Wednesday 18 February 2026 06:24:00 +0000 (0:00:01.153) 0:32:49.662 ****
2026-02-18 06:24:46.628523 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-18 06:24:46.628542 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-18 06:24:46.628559 | orchestrator |
2026-02-18 06:24:46.628576 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-18 06:24:46.628595 | orchestrator | Wednesday 18 February 2026 06:24:04 +0000 (0:00:03.293) 0:32:52.956 ****
2026-02-18 06:24:46.628613 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-18 06:24:46.628631 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-18 06:24:46.628649 | orchestrator |
2026-02-18 06:24:46.628668 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-18 06:24:46.628687 | orchestrator | Wednesday 18 February 2026 06:24:06 +0000 (0:00:02.038) 0:32:54.995 ****
2026-02-18 06:24:46.628705 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:24:46.628724 | orchestrator |
2026-02-18 06:24:46.628738 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-18 06:24:46.628749 | orchestrator | Wednesday 18 February 2026 06:24:07 +0000 (0:00:01.529) 0:32:56.524 ****
2026-02-18 06:24:46.628760 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:24:46.628771 | orchestrator |
2026-02-18 06:24:46.628782 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-18 06:24:46.628835 | orchestrator | Wednesday 18 February 2026 06:24:08 +0000 (0:00:00.784) 0:32:57.309 ****
2026-02-18 06:24:46.628849 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:24:46.628861 | orchestrator |
2026-02-18 06:24:46.628873 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-18 06:24:46.628886 | orchestrator | Wednesday 18 February 2026 06:24:09 +0000 (0:00:00.759) 0:32:58.069 ****
2026-02-18 06:24:46.628898 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-02-18 06:24:46.628936 | orchestrator |
2026-02-18 06:24:46.628949 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-18 06:24:46.628963 | orchestrator | Wednesday 18 February 2026 06:24:10 +0000 (0:00:01.279) 0:32:59.349 ****
2026-02-18 06:24:46.628974 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:24:46.628985 | orchestrator |
2026-02-18 06:24:46.628996 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-18 06:24:46.629008 | orchestrator | Wednesday 18 February 2026 06:24:11 +0000 (0:00:01.167) 0:33:00.516 ****
2026-02-18 06:24:46.629019 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:24:46.629030 | orchestrator |
2026-02-18 06:24:46.629041 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-18 06:24:46.629052 | orchestrator | Wednesday 18 February 2026 06:24:12 +0000 (0:00:01.201) 0:33:01.717 ****
2026-02-18 06:24:46.629063 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-02-18 06:24:46.629074 | orchestrator |
2026-02-18 06:24:46.629084 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-18 06:24:46.629095 | orchestrator | Wednesday 18 February 2026 06:24:14 +0000 (0:00:01.254) 0:33:02.973 ****
2026-02-18 06:24:46.629106 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:24:46.629116 | orchestrator |
2026-02-18 06:24:46.629127 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-18 06:24:46.629138 | orchestrator | Wednesday 18 February 2026 06:24:16 +0000 (0:00:02.068) 0:33:05.041 ****
2026-02-18 06:24:46.629149 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:24:46.629159 | orchestrator |
2026-02-18 06:24:46.629170 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-18 06:24:46.629181 | orchestrator | Wednesday 18 February 2026 06:24:18 +0000 (0:00:02.045) 0:33:07.086 ****
2026-02-18 06:24:46.629192 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:24:46.629203 | orchestrator |
2026-02-18 06:24:46.629213 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-18 06:24:46.629224 | orchestrator | Wednesday 18 February 2026 06:24:20 +0000 (0:00:02.377) 0:33:09.464 ****
2026-02-18 06:24:46.629235 | orchestrator | changed: [testbed-node-2]
2026-02-18 06:24:46.629246 | orchestrator |
2026-02-18 06:24:46.629257 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-18 06:24:46.629267 | orchestrator | Wednesday 18 February 2026 06:24:23 +0000 (0:00:03.367) 0:33:12.831 ****
2026-02-18 06:24:46.629278 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-18 06:24:46.629289 | orchestrator |
2026-02-18 06:24:46.629300 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-18 06:24:46.629310 | orchestrator | Wednesday 18 February 2026 06:24:25 +0000 (0:00:01.505) 0:33:14.337 ****
2026-02-18 06:24:46.629321 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-18 06:24:46.629332 | orchestrator |
2026-02-18 06:24:46.629342 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-18 06:24:46.629353 | orchestrator | Wednesday 18 February 2026 06:24:28 +0000 (0:00:02.787) 0:33:17.124 ****
2026-02-18 06:24:46.629364 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-18 06:24:46.629374 | orchestrator |
2026-02-18 06:24:46.629385 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-18 06:24:46.629396 | orchestrator | Wednesday 18 February 2026 06:24:30 +0000 (0:00:02.614) 0:33:19.738 ****
2026-02-18 06:24:46.629407 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:24:46.629417 | orchestrator |
2026-02-18 06:24:46.629429 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-18 06:24:46.629458 | orchestrator | Wednesday 18 February 2026 06:24:32 +0000 (0:00:01.345) 0:33:21.084 ****
2026-02-18 06:24:46.629470 | orchestrator | ok: [testbed-node-2]
2026-02-18 06:24:46.629481 | orchestrator |
2026-02-18 06:24:46.629492 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-18 06:24:46.629511 | orchestrator | Wednesday 18 February 2026 06:24:33 +0000 (0:00:01.144) 0:33:22.228 ****
2026-02-18 06:24:46.629522 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-02-18 06:24:46.629533 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-02-18 06:24:46.629544 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:24:46.629555 | orchestrator |
2026-02-18 06:24:46.629566 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-18 06:24:46.629577 | orchestrator | Wednesday 18 February 2026 06:24:34 +0000 (0:00:01.350) 0:33:23.579 ****
2026-02-18 06:24:46.629587 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-18 06:24:46.629598 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-02-18 06:24:46.629609 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-02-18 06:24:46.629620 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-18 06:24:46.629631 | orchestrator | skipping: [testbed-node-2]
2026-02-18 06:24:46.629642 | orchestrator |
2026-02-18 06:24:46.629653 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-02-18 06:24:46.629664 | orchestrator |
2026-02-18 06:24:46.629675 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-18 06:24:46.629685 | orchestrator | Wednesday 18 February 2026 06:24:36 +0000 (0:00:01.968) 0:33:25.548 ****
2026-02-18 06:24:46.629696 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:24:46.629707 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:24:46.629718 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:24:46.629729 | orchestrator |
2026-02-18 06:24:46.629740 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-18 06:24:46.629756 | orchestrator | Wednesday 18 February 2026 06:24:38 +0000 (0:00:01.639) 0:33:27.187 ****
2026-02-18 06:24:46.629767 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:24:46.629778 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:24:46.629811 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:24:46.629822 | orchestrator |
2026-02-18 06:24:46.629833 | orchestrator | TASK [Get pool list] ***********************************************************
2026-02-18 06:24:46.629844 | orchestrator | Wednesday 18 February 2026
06:24:40 +0000 (0:00:01.746) 0:33:28.934 **** 2026-02-18 06:24:46.629855 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:24:46.629866 | orchestrator | 2026-02-18 06:24:46.629877 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-02-18 06:24:46.629888 | orchestrator | Wednesday 18 February 2026 06:24:43 +0000 (0:00:02.956) 0:33:31.891 **** 2026-02-18 06:24:46.629899 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:24:46.629910 | orchestrator | 2026-02-18 06:24:46.629921 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-02-18 06:24:46.629932 | orchestrator | Wednesday 18 February 2026 06:24:46 +0000 (0:00:03.012) 0:33:34.903 **** 2026-02-18 06:24:46.629949 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-02-18T03:46:23.638395+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 
'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:24:46.629987 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-02-18T03:47:36.866440+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '33', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:24:47.443743 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-02-18T03:47:41.017900+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '81', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 
'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:24:47.443988 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-02-18T03:48:40.675045+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '65', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '59', 
'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:24:47.444016 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-02-18T03:48:46.162063+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '65', 'last_force_op_resend': '0', 
'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '59', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:24:47.444088 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-02-18T03:48:52.364058+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 
'target_version': "0'0"}, 'last_change': '65', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '61', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:24:47.789161 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-02-18T03:48:58.572002+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 
'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '182', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '61', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:24:47.789319 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-02-18T03:49:05.023987+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 
32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '65', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '63', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:24:47.789359 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-02-18T03:49:17.084109+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 
'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '109', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '104', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:24:47.789387 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-02-18T03:50:04.060511+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 
2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '90', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 90, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:24:47.789410 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-02-18T03:50:12.848628+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '98', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 98, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:26:04.375814 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-02-18T03:50:21.844147+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 
'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '194', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 194, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:26:04.375992 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-02-18T03:50:30.926872+0000', 'flags': 8193, 'flags_names': 
'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '115', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 115, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:26:04.376062 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 
'create_time': '2026-02-18T03:50:40.172290+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '122', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 122, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-18 06:26:04.376085 | orchestrator | 2026-02-18 
06:26:04.376105 | orchestrator | TASK [Disable balancer] ******************************************************** 2026-02-18 06:26:04.376127 | orchestrator | Wednesday 18 February 2026 06:24:48 +0000 (0:00:02.945) 0:33:37.848 **** 2026-02-18 06:26:04.376147 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:26:04.376166 | orchestrator | 2026-02-18 06:26:04.376220 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-02-18 06:26:04.376240 | orchestrator | Wednesday 18 February 2026 06:24:51 +0000 (0:00:02.943) 0:33:40.792 **** 2026-02-18 06:26:04.376259 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-18 06:26:04.376280 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-18 06:26:04.376311 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-18 06:26:04.376331 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-18 06:26:04.376352 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-18 06:26:04.376370 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-18 06:26:04.376390 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-02-18 06:26:04.376402 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-02-18 06:26:04.376413 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 
'mode': 'on'}) 2026-02-18 06:26:04.376424 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-02-18 06:26:04.376435 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-02-18 06:26:04.376446 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-18 06:26:04.376456 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-18 06:26:04.376467 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-18 06:26:04.376478 | orchestrator | 2026-02-18 06:26:04.376489 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-02-18 06:26:04.376511 | orchestrator | Wednesday 18 February 2026 06:26:04 +0000 (0:01:12.440) 0:34:53.232 **** 2026-02-18 06:26:34.406155 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-18 06:26:34.406267 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-18 06:26:34.406281 | orchestrator | 2026-02-18 06:26:34.406293 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-18 06:26:34.406304 | orchestrator | 2026-02-18 06:26:34.406314 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 06:26:34.406324 | orchestrator | Wednesday 18 February 2026 06:26:10 +0000 (0:00:05.929) 0:34:59.162 **** 2026-02-18 06:26:34.406334 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-02-18 06:26:34.406343 | orchestrator | 2026-02-18 06:26:34.406353 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 06:26:34.406363 | orchestrator | Wednesday 18 February 2026 06:26:11 +0000 (0:00:01.281) 0:35:00.443 **** 2026-02-18 
06:26:34.406373 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:34.406383 | orchestrator | 2026-02-18 06:26:34.406393 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-18 06:26:34.406403 | orchestrator | Wednesday 18 February 2026 06:26:13 +0000 (0:00:01.449) 0:35:01.893 **** 2026-02-18 06:26:34.406412 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:34.406422 | orchestrator | 2026-02-18 06:26:34.406432 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 06:26:34.406442 | orchestrator | Wednesday 18 February 2026 06:26:14 +0000 (0:00:01.172) 0:35:03.066 **** 2026-02-18 06:26:34.406452 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:34.406462 | orchestrator | 2026-02-18 06:26:34.406471 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 06:26:34.406481 | orchestrator | Wednesday 18 February 2026 06:26:15 +0000 (0:00:01.552) 0:35:04.619 **** 2026-02-18 06:26:34.406491 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:34.406500 | orchestrator | 2026-02-18 06:26:34.406510 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-18 06:26:34.406534 | orchestrator | Wednesday 18 February 2026 06:26:16 +0000 (0:00:01.159) 0:35:05.778 **** 2026-02-18 06:26:34.406593 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:34.406604 | orchestrator | 2026-02-18 06:26:34.406614 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-18 06:26:34.406623 | orchestrator | Wednesday 18 February 2026 06:26:18 +0000 (0:00:01.138) 0:35:06.917 **** 2026-02-18 06:26:34.406633 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:34.406642 | orchestrator | 2026-02-18 06:26:34.406652 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-18 
06:26:34.406663 | orchestrator | Wednesday 18 February 2026 06:26:19 +0000 (0:00:01.167) 0:35:08.084 **** 2026-02-18 06:26:34.406674 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:34.406686 | orchestrator | 2026-02-18 06:26:34.406697 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-18 06:26:34.406708 | orchestrator | Wednesday 18 February 2026 06:26:20 +0000 (0:00:01.162) 0:35:09.247 **** 2026-02-18 06:26:34.406719 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:34.406730 | orchestrator | 2026-02-18 06:26:34.406741 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-18 06:26:34.406752 | orchestrator | Wednesday 18 February 2026 06:26:21 +0000 (0:00:01.133) 0:35:10.381 **** 2026-02-18 06:26:34.406763 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:26:34.406774 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:26:34.406783 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:26:34.406793 | orchestrator | 2026-02-18 06:26:34.406802 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-18 06:26:34.406812 | orchestrator | Wednesday 18 February 2026 06:26:23 +0000 (0:00:02.053) 0:35:12.434 **** 2026-02-18 06:26:34.406822 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:34.406831 | orchestrator | 2026-02-18 06:26:34.406841 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-18 06:26:34.406850 | orchestrator | Wednesday 18 February 2026 06:26:24 +0000 (0:00:01.239) 0:35:13.674 **** 2026-02-18 06:26:34.406860 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:26:34.406869 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:26:34.406878 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:26:34.406888 | orchestrator | 2026-02-18 06:26:34.406897 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-18 06:26:34.406907 | orchestrator | Wednesday 18 February 2026 06:26:28 +0000 (0:00:03.256) 0:35:16.930 **** 2026-02-18 06:26:34.406916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-18 06:26:34.406926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-18 06:26:34.406935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-18 06:26:34.406945 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:34.406954 | orchestrator | 2026-02-18 06:26:34.406964 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-18 06:26:34.406973 | orchestrator | Wednesday 18 February 2026 06:26:29 +0000 (0:00:01.853) 0:35:18.784 **** 2026-02-18 06:26:34.406985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 06:26:34.407012 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-18 06:26:34.407023 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 06:26:34.407041 | 
orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:34.407051 | orchestrator | 2026-02-18 06:26:34.407061 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-18 06:26:34.407070 | orchestrator | Wednesday 18 February 2026 06:26:31 +0000 (0:00:02.089) 0:35:20.873 **** 2026-02-18 06:26:34.407082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:34.407095 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:34.407111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:34.407122 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:34.407132 | orchestrator | 2026-02-18 06:26:34.407141 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-18 06:26:34.407151 | orchestrator | Wednesday 18 February 
2026 06:26:33 +0000 (0:00:01.175) 0:35:22.049 **** 2026-02-18 06:26:34.407163 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:26:25.310001', 'end': '2026-02-18 06:26:25.359594', 'delta': '0:00:00.049593', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-18 06:26:34.407176 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:26:26.282959', 'end': '2026-02-18 06:26:26.338590', 'delta': '0:00:00.055631', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-18 06:26:34.407194 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:26:26.858942', 'end': '2026-02-18 06:26:26.899015', 'delta': '0:00:00.040073', 'msg': '', 'invocation': {'module_args': 
{'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-18 06:26:52.220990 | orchestrator | 2026-02-18 06:26:52.221092 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-18 06:26:52.221105 | orchestrator | Wednesday 18 February 2026 06:26:34 +0000 (0:00:01.218) 0:35:23.267 **** 2026-02-18 06:26:52.221112 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:52.221120 | orchestrator | 2026-02-18 06:26:52.221127 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-18 06:26:52.221135 | orchestrator | Wednesday 18 February 2026 06:26:35 +0000 (0:00:01.338) 0:35:24.605 **** 2026-02-18 06:26:52.221142 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:52.221149 | orchestrator | 2026-02-18 06:26:52.221156 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-18 06:26:52.221163 | orchestrator | Wednesday 18 February 2026 06:26:36 +0000 (0:00:01.249) 0:35:25.855 **** 2026-02-18 06:26:52.221170 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:52.221176 | orchestrator | 2026-02-18 06:26:52.221183 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-18 06:26:52.221190 | orchestrator | Wednesday 18 February 2026 06:26:38 +0000 (0:00:01.149) 0:35:27.005 **** 2026-02-18 06:26:52.221197 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:26:52.221203 | orchestrator | 2026-02-18 06:26:52.221210 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-02-18 06:26:52.221217 | orchestrator | Wednesday 18 February 2026 06:26:40 +0000 (0:00:02.012) 0:35:29.017 **** 2026-02-18 06:26:52.221224 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:52.221230 | orchestrator | 2026-02-18 06:26:52.221237 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 06:26:52.221243 | orchestrator | Wednesday 18 February 2026 06:26:41 +0000 (0:00:01.182) 0:35:30.200 **** 2026-02-18 06:26:52.221249 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:52.221255 | orchestrator | 2026-02-18 06:26:52.221277 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 06:26:52.221283 | orchestrator | Wednesday 18 February 2026 06:26:42 +0000 (0:00:01.168) 0:35:31.368 **** 2026-02-18 06:26:52.221290 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:52.221296 | orchestrator | 2026-02-18 06:26:52.221302 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:26:52.221308 | orchestrator | Wednesday 18 February 2026 06:26:43 +0000 (0:00:01.296) 0:35:32.665 **** 2026-02-18 06:26:52.221314 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:52.221320 | orchestrator | 2026-02-18 06:26:52.221327 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 06:26:52.221333 | orchestrator | Wednesday 18 February 2026 06:26:44 +0000 (0:00:01.126) 0:35:33.792 **** 2026-02-18 06:26:52.221339 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:52.221346 | orchestrator | 2026-02-18 06:26:52.221352 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 06:26:52.221359 | orchestrator | Wednesday 18 February 2026 06:26:46 +0000 (0:00:01.094) 0:35:34.886 **** 2026-02-18 06:26:52.221365 | orchestrator | ok: 
[testbed-node-3] 2026-02-18 06:26:52.221372 | orchestrator | 2026-02-18 06:26:52.221379 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-18 06:26:52.221385 | orchestrator | Wednesday 18 February 2026 06:26:47 +0000 (0:00:01.211) 0:35:36.098 **** 2026-02-18 06:26:52.221391 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:52.221397 | orchestrator | 2026-02-18 06:26:52.221403 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 06:26:52.221409 | orchestrator | Wednesday 18 February 2026 06:26:48 +0000 (0:00:01.199) 0:35:37.298 **** 2026-02-18 06:26:52.221436 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:52.221443 | orchestrator | 2026-02-18 06:26:52.221449 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-18 06:26:52.221456 | orchestrator | Wednesday 18 February 2026 06:26:49 +0000 (0:00:01.216) 0:35:38.514 **** 2026-02-18 06:26:52.221462 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:52.221469 | orchestrator | 2026-02-18 06:26:52.221475 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 06:26:52.221483 | orchestrator | Wednesday 18 February 2026 06:26:50 +0000 (0:00:01.180) 0:35:39.695 **** 2026-02-18 06:26:52.221489 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:26:52.221496 | orchestrator | 2026-02-18 06:26:52.221556 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-18 06:26:52.221565 | orchestrator | Wednesday 18 February 2026 06:26:51 +0000 (0:00:01.146) 0:35:40.841 **** 2026-02-18 06:26:52.221575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': 
'0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:26:52.221606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'uuids': ['b16ba19b-4a40-4954-b96f-45d5ea534fea'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN']}})  2026-02-18 06:26:52.221617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3f0eb34d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-18 06:26:52.221634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31']}})  2026-02-18 06:26:52.221643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:26:52.221658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:26:52.221667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-52-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 06:26:52.221675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:26:52.221684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ', 'dm-uuid-CRYPT-LUKS2-a588a620006c41148df487d2b156bd76-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 06:26:52.221699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:26:53.604647 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'uuids': ['a588a620-006c-4114-8df4-87d2b156bd76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ']}})  2026-02-18 06:26:53.604741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2']}})  2026-02-18 06:26:53.604768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:26:53.604781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b754618', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 06:26:53.604802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:26:53.604810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:26:53.604821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN', 'dm-uuid-CRYPT-LUKS2-b16ba19b4a404954b96f45d5ea534fea-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 06:26:53.604834 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:26:53.604843 | orchestrator | 2026-02-18 06:26:53.604850 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 06:26:53.604857 | orchestrator | Wednesday 18 February 2026 06:26:53 +0000 (0:00:01.404) 0:35:42.246 **** 2026-02-18 06:26:53.604958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:53.604966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'uuids': ['b16ba19b-4a40-4954-b96f-45d5ea534fea'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:53.604974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3f0eb34d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:53.604988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:54.875324 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:54.875453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:54.875470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-52-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:54.875482 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:54.875494 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ', 'dm-uuid-CRYPT-LUKS2-a588a620006c41148df487d2b156bd76-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:54.875555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:54.875592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'uuids': ['a588a620-006c-4114-8df4-87d2b156bd76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:54.875615 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:54.875630 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:26:54.875657 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b754618', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:27:24.042659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:27:24.042791 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:27:24.042807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN', 'dm-uuid-CRYPT-LUKS2-b16ba19b4a404954b96f45d5ea534fea-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:27:24.042825 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:27:24.042837 | orchestrator | 2026-02-18 06:27:24.042847 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:27:24.042857 | orchestrator | Wednesday 18 February 2026 06:26:54 +0000 (0:00:01.497) 0:35:43.744 **** 2026-02-18 06:27:24.042866 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:27:24.042875 | orchestrator | 2026-02-18 06:27:24.042884 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:27:24.042892 | orchestrator | Wednesday 18 February 2026 06:26:56 +0000 (0:00:01.498) 0:35:45.242 **** 2026-02-18 06:27:24.042900 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:27:24.042907 | orchestrator | 2026-02-18 06:27:24.042915 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:27:24.042923 | orchestrator | Wednesday 18 February 2026 06:26:57 +0000 (0:00:01.237) 0:35:46.479 **** 2026-02-18 06:27:24.042931 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:27:24.042939 | orchestrator | 2026-02-18 06:27:24.042947 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:27:24.042955 | orchestrator | Wednesday 18 February 2026 06:26:59 +0000 (0:00:01.524) 0:35:48.004 **** 2026-02-18 06:27:24.042963 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:27:24.042992 | orchestrator | 2026-02-18 06:27:24.043001 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:27:24.043009 | orchestrator | Wednesday 18 February 2026 06:27:00 +0000 (0:00:01.114) 0:35:49.118 **** 2026-02-18 06:27:24.043017 | orchestrator | skipping: [testbed-node-3] 2026-02-18 
06:27:24.043025 | orchestrator | 2026-02-18 06:27:24.043032 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:27:24.043040 | orchestrator | Wednesday 18 February 2026 06:27:01 +0000 (0:00:01.263) 0:35:50.381 **** 2026-02-18 06:27:24.043048 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:27:24.043056 | orchestrator | 2026-02-18 06:27:24.043064 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:27:24.043072 | orchestrator | Wednesday 18 February 2026 06:27:02 +0000 (0:00:01.211) 0:35:51.592 **** 2026-02-18 06:27:24.043080 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-18 06:27:24.043088 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-18 06:27:24.043096 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-18 06:27:24.043104 | orchestrator | 2026-02-18 06:27:24.043111 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:27:24.043132 | orchestrator | Wednesday 18 February 2026 06:27:04 +0000 (0:00:02.098) 0:35:53.691 **** 2026-02-18 06:27:24.043140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-18 06:27:24.043148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-18 06:27:24.043156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-18 06:27:24.043165 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:27:24.043173 | orchestrator | 2026-02-18 06:27:24.043181 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:27:24.043190 | orchestrator | Wednesday 18 February 2026 06:27:06 +0000 (0:00:01.287) 0:35:54.978 **** 2026-02-18 06:27:24.043213 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-18 06:27:24.043224 | 
orchestrator | 2026-02-18 06:27:24.043235 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:27:24.043245 | orchestrator | Wednesday 18 February 2026 06:27:07 +0000 (0:00:01.141) 0:35:56.120 **** 2026-02-18 06:27:24.043254 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:27:24.043264 | orchestrator | 2026-02-18 06:27:24.043273 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:27:24.043286 | orchestrator | Wednesday 18 February 2026 06:27:08 +0000 (0:00:01.153) 0:35:57.274 **** 2026-02-18 06:27:24.043299 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:27:24.043313 | orchestrator | 2026-02-18 06:27:24.043328 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:27:24.043342 | orchestrator | Wednesday 18 February 2026 06:27:09 +0000 (0:00:01.151) 0:35:58.425 **** 2026-02-18 06:27:24.043355 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:27:24.043369 | orchestrator | 2026-02-18 06:27:24.043379 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:27:24.043389 | orchestrator | Wednesday 18 February 2026 06:27:10 +0000 (0:00:01.172) 0:35:59.598 **** 2026-02-18 06:27:24.043398 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:27:24.043407 | orchestrator | 2026-02-18 06:27:24.043414 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:27:24.043422 | orchestrator | Wednesday 18 February 2026 06:27:11 +0000 (0:00:01.249) 0:36:00.848 **** 2026-02-18 06:27:24.043431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:27:24.043461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:27:24.043470 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-18 06:27:24.043478 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:27:24.043486 | orchestrator | 2026-02-18 06:27:24.043493 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:27:24.043510 | orchestrator | Wednesday 18 February 2026 06:27:13 +0000 (0:00:01.475) 0:36:02.323 **** 2026-02-18 06:27:24.043517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:27:24.043525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:27:24.043533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 06:27:24.043541 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:27:24.043549 | orchestrator | 2026-02-18 06:27:24.043557 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:27:24.043565 | orchestrator | Wednesday 18 February 2026 06:27:14 +0000 (0:00:01.410) 0:36:03.734 **** 2026-02-18 06:27:24.043573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:27:24.043581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:27:24.043588 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 06:27:24.043596 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:27:24.043604 | orchestrator | 2026-02-18 06:27:24.043612 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:27:24.043620 | orchestrator | Wednesday 18 February 2026 06:27:16 +0000 (0:00:01.410) 0:36:05.144 **** 2026-02-18 06:27:24.043629 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:27:24.043638 | orchestrator | 2026-02-18 06:27:24.043647 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:27:24.043655 | orchestrator | Wednesday 18 February 2026 06:27:17 +0000 
(0:00:01.150) 0:36:06.295 **** 2026-02-18 06:27:24.043664 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-18 06:27:24.043673 | orchestrator | 2026-02-18 06:27:24.043681 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 06:27:24.043690 | orchestrator | Wednesday 18 February 2026 06:27:18 +0000 (0:00:01.418) 0:36:07.713 **** 2026-02-18 06:27:24.043699 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:27:24.043707 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:27:24.043716 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:27:24.043725 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-18 06:27:24.043733 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:27:24.043742 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:27:24.043750 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:27:24.043759 | orchestrator | 2026-02-18 06:27:24.043768 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 06:27:24.043777 | orchestrator | Wednesday 18 February 2026 06:27:21 +0000 (0:00:02.179) 0:36:09.893 **** 2026-02-18 06:27:24.043785 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:27:24.043794 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:27:24.043808 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:27:24.043817 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-18 06:27:24.043826 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:27:24.043835 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:27:24.043844 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:27:24.043852 | orchestrator | 2026-02-18 06:27:24.043867 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-18 06:28:18.486728 | orchestrator | Wednesday 18 February 2026 06:27:24 +0000 (0:00:03.005) 0:36:12.898 **** 2026-02-18 06:28:18.486905 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.486938 | orchestrator | 2026-02-18 06:28:18.486949 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-18 06:28:18.486959 | orchestrator | Wednesday 18 February 2026 06:27:25 +0000 (0:00:01.511) 0:36:14.409 **** 2026-02-18 06:28:18.486968 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.486977 | orchestrator | 2026-02-18 06:28:18.486985 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-18 06:28:18.486994 | orchestrator | Wednesday 18 February 2026 06:27:26 +0000 (0:00:01.227) 0:36:15.637 **** 2026-02-18 06:28:18.487003 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.487011 | orchestrator | 2026-02-18 06:28:18.487020 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-18 06:28:18.487030 | orchestrator | Wednesday 18 February 2026 06:27:28 +0000 (0:00:01.252) 0:36:16.890 **** 2026-02-18 06:28:18.487039 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-18 06:28:18.487049 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-02-18 06:28:18.487057 | orchestrator | 2026-02-18 06:28:18.487066 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-02-18 06:28:18.487074 | orchestrator | Wednesday 18 February 2026 06:27:33 +0000 (0:00:05.067) 0:36:21.957 **** 2026-02-18 06:28:18.487083 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-02-18 06:28:18.487093 | orchestrator | 2026-02-18 06:28:18.487102 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 06:28:18.487110 | orchestrator | Wednesday 18 February 2026 06:27:34 +0000 (0:00:01.128) 0:36:23.086 **** 2026-02-18 06:28:18.487119 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-02-18 06:28:18.487128 | orchestrator | 2026-02-18 06:28:18.487136 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 06:28:18.487145 | orchestrator | Wednesday 18 February 2026 06:27:35 +0000 (0:00:01.157) 0:36:24.244 **** 2026-02-18 06:28:18.487153 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:28:18.487162 | orchestrator | 2026-02-18 06:28:18.487171 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 06:28:18.487179 | orchestrator | Wednesday 18 February 2026 06:27:36 +0000 (0:00:01.133) 0:36:25.377 **** 2026-02-18 06:28:18.487188 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.487196 | orchestrator | 2026-02-18 06:28:18.487205 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-18 06:28:18.487213 | orchestrator | Wednesday 18 February 2026 06:27:38 +0000 (0:00:01.509) 0:36:26.886 **** 2026-02-18 06:28:18.487222 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.487231 | orchestrator | 2026-02-18 06:28:18.487240 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-18 06:28:18.487248 | orchestrator | Wednesday 18 February 2026 
06:27:39 +0000 (0:00:01.550) 0:36:28.437 **** 2026-02-18 06:28:18.487257 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.487266 | orchestrator | 2026-02-18 06:28:18.487276 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-18 06:28:18.487287 | orchestrator | Wednesday 18 February 2026 06:27:41 +0000 (0:00:01.546) 0:36:29.983 **** 2026-02-18 06:28:18.487296 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:28:18.487306 | orchestrator | 2026-02-18 06:28:18.487316 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-18 06:28:18.487326 | orchestrator | Wednesday 18 February 2026 06:27:42 +0000 (0:00:01.120) 0:36:31.104 **** 2026-02-18 06:28:18.487336 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:28:18.487370 | orchestrator | 2026-02-18 06:28:18.487381 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-18 06:28:18.487391 | orchestrator | Wednesday 18 February 2026 06:27:43 +0000 (0:00:01.115) 0:36:32.219 **** 2026-02-18 06:28:18.487401 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:28:18.487411 | orchestrator | 2026-02-18 06:28:18.487428 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-18 06:28:18.487438 | orchestrator | Wednesday 18 February 2026 06:27:44 +0000 (0:00:01.137) 0:36:33.357 **** 2026-02-18 06:28:18.487448 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.487457 | orchestrator | 2026-02-18 06:28:18.487467 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-18 06:28:18.487477 | orchestrator | Wednesday 18 February 2026 06:27:46 +0000 (0:00:01.542) 0:36:34.899 **** 2026-02-18 06:28:18.487487 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.487496 | orchestrator | 2026-02-18 06:28:18.487507 | orchestrator | TASK [ceph-handler : 
Include check_socket_non_container.yml] ******************* 2026-02-18 06:28:18.487517 | orchestrator | Wednesday 18 February 2026 06:27:47 +0000 (0:00:01.560) 0:36:36.459 **** 2026-02-18 06:28:18.487527 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:28:18.487536 | orchestrator | 2026-02-18 06:28:18.487546 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 06:28:18.487556 | orchestrator | Wednesday 18 February 2026 06:27:48 +0000 (0:00:01.101) 0:36:37.562 **** 2026-02-18 06:28:18.487566 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:28:18.487576 | orchestrator | 2026-02-18 06:28:18.487585 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 06:28:18.487608 | orchestrator | Wednesday 18 February 2026 06:27:49 +0000 (0:00:01.205) 0:36:38.767 **** 2026-02-18 06:28:18.487618 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.487628 | orchestrator | 2026-02-18 06:28:18.487637 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 06:28:18.487645 | orchestrator | Wednesday 18 February 2026 06:27:51 +0000 (0:00:01.172) 0:36:39.940 **** 2026-02-18 06:28:18.487654 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.487662 | orchestrator | 2026-02-18 06:28:18.487671 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 06:28:18.487680 | orchestrator | Wednesday 18 February 2026 06:27:52 +0000 (0:00:01.135) 0:36:41.075 **** 2026-02-18 06:28:18.487688 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.487697 | orchestrator | 2026-02-18 06:28:18.487723 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 06:28:18.487732 | orchestrator | Wednesday 18 February 2026 06:27:53 +0000 (0:00:01.148) 0:36:42.224 **** 2026-02-18 06:28:18.487741 | orchestrator | skipping: 
[testbed-node-3] 2026-02-18 06:28:18.487750 | orchestrator | 2026-02-18 06:28:18.487758 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 06:28:18.487767 | orchestrator | Wednesday 18 February 2026 06:27:54 +0000 (0:00:01.147) 0:36:43.372 **** 2026-02-18 06:28:18.487776 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:28:18.487785 | orchestrator | 2026-02-18 06:28:18.487793 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 06:28:18.487802 | orchestrator | Wednesday 18 February 2026 06:27:55 +0000 (0:00:01.142) 0:36:44.514 **** 2026-02-18 06:28:18.487811 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:28:18.487819 | orchestrator | 2026-02-18 06:28:18.487828 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 06:28:18.487836 | orchestrator | Wednesday 18 February 2026 06:27:56 +0000 (0:00:01.281) 0:36:45.796 **** 2026-02-18 06:28:18.487845 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.487854 | orchestrator | 2026-02-18 06:28:18.487862 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 06:28:18.487871 | orchestrator | Wednesday 18 February 2026 06:27:58 +0000 (0:00:01.224) 0:36:47.020 **** 2026-02-18 06:28:18.487880 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:28:18.487888 | orchestrator | 2026-02-18 06:28:18.487897 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-18 06:28:18.487905 | orchestrator | Wednesday 18 February 2026 06:27:59 +0000 (0:00:01.183) 0:36:48.204 **** 2026-02-18 06:28:18.487914 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:28:18.487923 | orchestrator | 2026-02-18 06:28:18.487938 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-18 06:28:18.487946 | 
orchestrator | Wednesday 18 February 2026 06:28:00 +0000 (0:00:01.121) 0:36:49.326 ****
2026-02-18 06:28:18.487955 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:28:18.487964 | orchestrator |
2026-02-18 06:28:18.487972 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-18 06:28:18.487981 | orchestrator | Wednesday 18 February 2026 06:28:01 +0000 (0:00:01.154) 0:36:50.481 ****
2026-02-18 06:28:18.487990 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:28:18.487998 | orchestrator |
2026-02-18 06:28:18.488007 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-18 06:28:18.488016 | orchestrator | Wednesday 18 February 2026 06:28:02 +0000 (0:00:01.142) 0:36:51.624 ****
2026-02-18 06:28:18.488024 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:28:18.488033 | orchestrator |
2026-02-18 06:28:18.488042 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-18 06:28:18.488050 | orchestrator | Wednesday 18 February 2026 06:28:03 +0000 (0:00:01.139) 0:36:52.763 ****
2026-02-18 06:28:18.488060 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:28:18.488075 | orchestrator |
2026-02-18 06:28:18.488090 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-18 06:28:18.488103 | orchestrator | Wednesday 18 February 2026 06:28:05 +0000 (0:00:01.138) 0:36:53.901 ****
2026-02-18 06:28:18.488116 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:28:18.488130 | orchestrator |
2026-02-18 06:28:18.488143 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-18 06:28:18.488156 | orchestrator | Wednesday 18 February 2026 06:28:06 +0000 (0:00:01.165) 0:36:55.067 ****
2026-02-18 06:28:18.488169 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:28:18.488183 | orchestrator |
2026-02-18 06:28:18.488198 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-18 06:28:18.488215 | orchestrator | Wednesday 18 February 2026 06:28:07 +0000 (0:00:01.109) 0:36:56.176 ****
2026-02-18 06:28:18.488229 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:28:18.488242 | orchestrator |
2026-02-18 06:28:18.488250 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-18 06:28:18.488259 | orchestrator | Wednesday 18 February 2026 06:28:08 +0000 (0:00:01.142) 0:36:57.319 ****
2026-02-18 06:28:18.488267 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:28:18.488276 | orchestrator |
2026-02-18 06:28:18.488284 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-18 06:28:18.488293 | orchestrator | Wednesday 18 February 2026 06:28:09 +0000 (0:00:01.193) 0:36:58.513 ****
2026-02-18 06:28:18.488302 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:28:18.488310 | orchestrator |
2026-02-18 06:28:18.488319 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-18 06:28:18.488327 | orchestrator | Wednesday 18 February 2026 06:28:10 +0000 (0:00:01.142) 0:36:59.655 ****
2026-02-18 06:28:18.488335 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:28:18.488344 | orchestrator |
2026-02-18 06:28:18.488377 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-18 06:28:18.488386 | orchestrator | Wednesday 18 February 2026 06:28:11 +0000 (0:00:01.146) 0:37:00.802 ****
2026-02-18 06:28:18.488395 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:28:18.488403 | orchestrator |
2026-02-18 06:28:18.488412 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-18 06:28:18.488421 | orchestrator | Wednesday 18 February 2026 06:28:13 +0000 (0:00:01.190) 0:37:01.992 ****
2026-02-18 06:28:18.488429 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:28:18.488438 | orchestrator |
2026-02-18 06:28:18.488447 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-18 06:28:18.488456 | orchestrator | Wednesday 18 February 2026 06:28:15 +0000 (0:00:02.291) 0:37:03.931 ****
2026-02-18 06:28:18.488464 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:28:18.488480 | orchestrator |
2026-02-18 06:28:18.488489 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-18 06:28:18.488498 | orchestrator | Wednesday 18 February 2026 06:28:17 +0000 (0:00:02.291) 0:37:06.223 ****
2026-02-18 06:28:18.488507 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-18 06:28:18.488515 | orchestrator |
2026-02-18 06:28:18.488530 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-18 06:29:06.117900 | orchestrator | Wednesday 18 February 2026 06:28:18 +0000 (0:00:01.123) 0:37:07.346 ****
2026-02-18 06:29:06.118006 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118066 | orchestrator |
2026-02-18 06:29:06.118077 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-18 06:29:06.118084 | orchestrator | Wednesday 18 February 2026 06:28:19 +0000 (0:00:01.128) 0:37:08.475 ****
2026-02-18 06:29:06.118091 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118098 | orchestrator |
2026-02-18 06:29:06.118105 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-18 06:29:06.118112 | orchestrator | Wednesday 18 February 2026 06:28:20 +0000 (0:00:01.147) 0:37:09.623 ****
2026-02-18 06:29:06.118119 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-18 06:29:06.118127 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-18 06:29:06.118135 | orchestrator |
2026-02-18 06:29:06.118143 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-18 06:29:06.118150 | orchestrator | Wednesday 18 February 2026 06:28:22 +0000 (0:00:01.838) 0:37:11.461 ****
2026-02-18 06:29:06.118158 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:29:06.118167 | orchestrator |
2026-02-18 06:29:06.118174 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-18 06:29:06.118181 | orchestrator | Wednesday 18 February 2026 06:28:24 +0000 (0:00:01.486) 0:37:12.948 ****
2026-02-18 06:29:06.118188 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118196 | orchestrator |
2026-02-18 06:29:06.118203 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-18 06:29:06.118211 | orchestrator | Wednesday 18 February 2026 06:28:25 +0000 (0:00:01.198) 0:37:14.146 ****
2026-02-18 06:29:06.118218 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118225 | orchestrator |
2026-02-18 06:29:06.118233 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-18 06:29:06.118240 | orchestrator | Wednesday 18 February 2026 06:28:26 +0000 (0:00:01.142) 0:37:15.288 ****
2026-02-18 06:29:06.118247 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118254 | orchestrator |
2026-02-18 06:29:06.118262 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-18 06:29:06.118269 | orchestrator | Wednesday 18 February 2026 06:28:27 +0000 (0:00:01.170) 0:37:16.459 ****
2026-02-18 06:29:06.118340 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-18 06:29:06.118350 | orchestrator |
2026-02-18 06:29:06.118358 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-18 06:29:06.118365 | orchestrator | Wednesday 18 February 2026 06:28:28 +0000 (0:00:01.140) 0:37:17.599 ****
2026-02-18 06:29:06.118372 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:29:06.118378 | orchestrator |
2026-02-18 06:29:06.118385 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-18 06:29:06.118392 | orchestrator | Wednesday 18 February 2026 06:28:30 +0000 (0:00:01.791) 0:37:19.391 ****
2026-02-18 06:29:06.118399 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-18 06:29:06.118406 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-18 06:29:06.118413 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-18 06:29:06.118420 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118451 | orchestrator |
2026-02-18 06:29:06.118534 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-18 06:29:06.118549 | orchestrator | Wednesday 18 February 2026 06:28:31 +0000 (0:00:01.276) 0:37:20.667 ****
2026-02-18 06:29:06.118556 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118563 | orchestrator |
2026-02-18 06:29:06.118570 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-18 06:29:06.118577 | orchestrator | Wednesday 18 February 2026 06:28:32 +0000 (0:00:01.138) 0:37:21.806 ****
2026-02-18 06:29:06.118584 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118591 | orchestrator |
2026-02-18 06:29:06.118599 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-18 06:29:06.118606 | orchestrator | Wednesday 18 February 2026 06:28:34 +0000 (0:00:01.242) 0:37:23.048 ****
2026-02-18 06:29:06.118613 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118621 | orchestrator |
2026-02-18 06:29:06.118628 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-18 06:29:06.118636 | orchestrator | Wednesday 18 February 2026 06:28:35 +0000 (0:00:01.158) 0:37:24.207 ****
2026-02-18 06:29:06.118643 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118650 | orchestrator |
2026-02-18 06:29:06.118657 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-18 06:29:06.118664 | orchestrator | Wednesday 18 February 2026 06:28:36 +0000 (0:00:01.175) 0:37:25.383 ****
2026-02-18 06:29:06.118671 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118678 | orchestrator |
2026-02-18 06:29:06.118686 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-18 06:29:06.118694 | orchestrator | Wednesday 18 February 2026 06:28:37 +0000 (0:00:01.177) 0:37:26.560 ****
2026-02-18 06:29:06.118701 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:29:06.118709 | orchestrator |
2026-02-18 06:29:06.118720 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-18 06:29:06.118728 | orchestrator | Wednesday 18 February 2026 06:28:40 +0000 (0:00:02.481) 0:37:29.042 ****
2026-02-18 06:29:06.118735 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:29:06.118741 | orchestrator |
2026-02-18 06:29:06.118749 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-18 06:29:06.118757 | orchestrator | Wednesday 18 February 2026 06:28:41 +0000 (0:00:01.149) 0:37:30.192 ****
2026-02-18 06:29:06.118764 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-02-18 06:29:06.118772 | orchestrator |
2026-02-18 06:29:06.118796 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-18 06:29:06.118804 | orchestrator | Wednesday 18 February 2026 06:28:42 +0000 (0:00:01.273) 0:37:31.466 ****
2026-02-18 06:29:06.118812 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118819 | orchestrator |
2026-02-18 06:29:06.118826 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-18 06:29:06.118834 | orchestrator | Wednesday 18 February 2026 06:28:43 +0000 (0:00:01.261) 0:37:32.727 ****
2026-02-18 06:29:06.118840 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118847 | orchestrator |
2026-02-18 06:29:06.118854 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-18 06:29:06.118861 | orchestrator | Wednesday 18 February 2026 06:28:45 +0000 (0:00:01.196) 0:37:33.923 ****
2026-02-18 06:29:06.118868 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118875 | orchestrator |
2026-02-18 06:29:06.118882 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-18 06:29:06.118889 | orchestrator | Wednesday 18 February 2026 06:28:46 +0000 (0:00:01.187) 0:37:35.111 ****
2026-02-18 06:29:06.118896 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118903 | orchestrator |
2026-02-18 06:29:06.118910 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-18 06:29:06.118916 | orchestrator | Wednesday 18 February 2026 06:28:47 +0000 (0:00:01.269) 0:37:36.381 ****
2026-02-18 06:29:06.118931 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118938 | orchestrator |
2026-02-18 06:29:06.118945 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-18 06:29:06.118953 | orchestrator | Wednesday 18 February 2026 06:28:48 +0000 (0:00:01.128) 0:37:37.509 ****
2026-02-18 06:29:06.118960 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118967 | orchestrator |
2026-02-18 06:29:06.118974 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-18 06:29:06.118981 | orchestrator | Wednesday 18 February 2026 06:28:49 +0000 (0:00:01.170) 0:37:38.680 ****
2026-02-18 06:29:06.118987 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.118994 | orchestrator |
2026-02-18 06:29:06.119001 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-18 06:29:06.119009 | orchestrator | Wednesday 18 February 2026 06:28:50 +0000 (0:00:01.190) 0:37:39.871 ****
2026-02-18 06:29:06.119016 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:06.119023 | orchestrator |
2026-02-18 06:29:06.119030 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-18 06:29:06.119038 | orchestrator | Wednesday 18 February 2026 06:28:52 +0000 (0:00:01.137) 0:37:41.009 ****
2026-02-18 06:29:06.119045 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:29:06.119052 | orchestrator |
2026-02-18 06:29:06.119059 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-18 06:29:06.119066 | orchestrator | Wednesday 18 February 2026 06:28:53 +0000 (0:00:01.168) 0:37:42.177 ****
2026-02-18 06:29:06.119074 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-02-18 06:29:06.119081 | orchestrator |
2026-02-18 06:29:06.119088 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-18 06:29:06.119095 | orchestrator | Wednesday 18 February 2026 06:28:54 +0000 (0:00:01.173) 0:37:43.350 ****
2026-02-18 06:29:06.119103 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-02-18 06:29:06.119110 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-18 06:29:06.119118 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-18 06:29:06.119125 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-18 06:29:06.119132 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-18 06:29:06.119139 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-18 06:29:06.119146 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-18 06:29:06.119153 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-18 06:29:06.119160 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-18 06:29:06.119167 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-18 06:29:06.119174 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-18 06:29:06.119182 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-18 06:29:06.119189 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-18 06:29:06.119196 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-18 06:29:06.119202 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-02-18 06:29:06.119209 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-02-18 06:29:06.119216 | orchestrator |
2026-02-18 06:29:06.119223 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-18 06:29:06.119230 | orchestrator | Wednesday 18 February 2026 06:29:01 +0000 (0:00:06.638) 0:37:49.988 ****
2026-02-18 06:29:06.119237 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-02-18 06:29:06.119244 | orchestrator |
2026-02-18 06:29:06.119251 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-18 06:29:06.119262 | orchestrator | Wednesday 18 February 2026 06:29:02 +0000 (0:00:01.496) 0:37:51.485 ****
2026-02-18 06:29:06.119305 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-18 06:29:06.119314 | orchestrator |
2026-02-18 06:29:06.119321 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-18 06:29:06.119328 | orchestrator | Wednesday 18 February 2026 06:29:04 +0000 (0:00:01.498) 0:37:52.984 ****
2026-02-18 06:29:06.119335 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-18 06:29:06.119342 | orchestrator |
2026-02-18 06:29:06.119355 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-18 06:29:57.306320 | orchestrator | Wednesday 18 February 2026 06:29:06 +0000 (0:00:01.994) 0:37:54.979 ****
2026-02-18 06:29:57.306439 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.306457 | orchestrator |
2026-02-18 06:29:57.306470 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-18 06:29:57.306482 | orchestrator | Wednesday 18 February 2026 06:29:07 +0000 (0:00:01.145) 0:37:56.124 ****
2026-02-18 06:29:57.306492 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.306504 | orchestrator |
2026-02-18 06:29:57.306515 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-18 06:29:57.306526 | orchestrator | Wednesday 18 February 2026 06:29:08 +0000 (0:00:01.168) 0:37:57.293 ****
2026-02-18 06:29:57.306538 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.306549 | orchestrator |
2026-02-18 06:29:57.306559 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-18 06:29:57.306571 | orchestrator | Wednesday 18 February 2026 06:29:09 +0000 (0:00:01.187) 0:37:58.481 ****
2026-02-18 06:29:57.306581 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.306592 | orchestrator |
2026-02-18 06:29:57.306603 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-18 06:29:57.306614 | orchestrator | Wednesday 18 February 2026 06:29:10 +0000 (0:00:01.115) 0:37:59.596 ****
2026-02-18 06:29:57.306625 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.306636 | orchestrator |
2026-02-18 06:29:57.306647 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-18 06:29:57.306659 | orchestrator | Wednesday 18 February 2026 06:29:11 +0000 (0:00:01.166) 0:38:00.762 ****
2026-02-18 06:29:57.306670 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.306681 | orchestrator |
2026-02-18 06:29:57.306692 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-18 06:29:57.306703 | orchestrator | Wednesday 18 February 2026 06:29:13 +0000 (0:00:01.129) 0:38:01.892 ****
2026-02-18 06:29:57.306714 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.306725 | orchestrator |
2026-02-18 06:29:57.306736 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-18 06:29:57.306747 | orchestrator | Wednesday 18 February 2026 06:29:14 +0000 (0:00:01.163) 0:38:03.055 ****
2026-02-18 06:29:57.306758 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.306769 | orchestrator |
2026-02-18 06:29:57.306780 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-18 06:29:57.306791 | orchestrator | Wednesday 18 February 2026 06:29:15 +0000 (0:00:01.137) 0:38:04.193 ****
2026-02-18 06:29:57.306801 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.306812 | orchestrator |
2026-02-18 06:29:57.306825 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-18 06:29:57.306838 | orchestrator | Wednesday 18 February 2026 06:29:16 +0000 (0:00:01.158) 0:38:05.351 ****
2026-02-18 06:29:57.306850 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.306863 | orchestrator |
2026-02-18 06:29:57.306875 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-18 06:29:57.306888 | orchestrator | Wednesday 18 February 2026 06:29:17 +0000 (0:00:01.198) 0:38:06.550 ****
2026-02-18 06:29:57.306927 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:29:57.306941 | orchestrator |
2026-02-18 06:29:57.306954 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-18 06:29:57.306966 | orchestrator | Wednesday 18 February 2026 06:29:18 +0000 (0:00:01.233) 0:38:07.783 ****
2026-02-18 06:29:57.306979 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-18 06:29:57.306990 | orchestrator |
2026-02-18 06:29:57.307003 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-18 06:29:57.307016 | orchestrator | Wednesday 18 February 2026 06:29:23 +0000 (0:00:04.422) 0:38:12.206 ****
2026-02-18 06:29:57.307029 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-18 06:29:57.307042 | orchestrator |
2026-02-18 06:29:57.307057 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-18 06:29:57.307076 | orchestrator | Wednesday 18 February 2026 06:29:24 +0000 (0:00:01.233) 0:38:13.440 ****
2026-02-18 06:29:57.307097 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-18 06:29:57.307139 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-18 06:29:57.307162 | orchestrator |
2026-02-18 06:29:57.307180 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-18 06:29:57.307191 | orchestrator | Wednesday 18 February 2026 06:29:32 +0000 (0:00:07.466) 0:38:20.906 ****
2026-02-18 06:29:57.307228 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.307243 | orchestrator |
2026-02-18 06:29:57.307255 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-18 06:29:57.307266 | orchestrator | Wednesday 18 February 2026 06:29:33 +0000 (0:00:01.128) 0:38:22.035 ****
2026-02-18 06:29:57.307277 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.307288 | orchestrator |
2026-02-18 06:29:57.307315 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-18 06:29:57.307327 | orchestrator | Wednesday 18 February 2026 06:29:34 +0000 (0:00:01.183) 0:38:23.218 ****
2026-02-18 06:29:57.307338 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.307349 | orchestrator |
2026-02-18 06:29:57.307360 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-18 06:29:57.307371 | orchestrator | Wednesday 18 February 2026 06:29:35 +0000 (0:00:01.198) 0:38:24.417 ****
2026-02-18 06:29:57.307381 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.307392 | orchestrator |
2026-02-18 06:29:57.307403 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-18 06:29:57.307414 | orchestrator | Wednesday 18 February 2026 06:29:36 +0000 (0:00:01.163) 0:38:25.580 ****
2026-02-18 06:29:57.307425 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.307436 | orchestrator |
2026-02-18 06:29:57.307447 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-18 06:29:57.307458 | orchestrator | Wednesday 18 February 2026 06:29:37 +0000 (0:00:01.174) 0:38:26.755 ****
2026-02-18 06:29:57.307469 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:29:57.307480 | orchestrator |
2026-02-18 06:29:57.307491 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-18 06:29:57.307502 | orchestrator | Wednesday 18 February 2026 06:29:39 +0000 (0:00:01.272) 0:38:28.028 ****
2026-02-18 06:29:57.307512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 06:29:57.307533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 06:29:57.307544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 06:29:57.307555 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.307566 | orchestrator |
2026-02-18 06:29:57.307577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-18 06:29:57.307587 | orchestrator | Wednesday 18 February 2026 06:29:40 +0000 (0:00:01.780) 0:38:29.809 ****
2026-02-18 06:29:57.307598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 06:29:57.307609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 06:29:57.307620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 06:29:57.307631 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.307641 | orchestrator |
2026-02-18 06:29:57.307652 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-18 06:29:57.307663 | orchestrator | Wednesday 18 February 2026 06:29:42 +0000 (0:00:01.780) 0:38:31.590 ****
2026-02-18 06:29:57.307674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 06:29:57.307685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 06:29:57.307696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 06:29:57.307706 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.307717 | orchestrator |
2026-02-18 06:29:57.307728 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-18 06:29:57.307739 | orchestrator | Wednesday 18 February 2026 06:29:44 +0000 (0:00:01.928) 0:38:33.519 ****
2026-02-18 06:29:57.307750 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:29:57.307761 | orchestrator |
2026-02-18 06:29:57.307771 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-18 06:29:57.307782 | orchestrator | Wednesday 18 February 2026 06:29:45 +0000 (0:00:01.224) 0:38:34.744 ****
2026-02-18 06:29:57.307793 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-18 06:29:57.307804 | orchestrator |
2026-02-18 06:29:57.307815 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-18 06:29:57.307826 | orchestrator | Wednesday 18 February 2026 06:29:47 +0000 (0:00:01.391) 0:38:36.136 ****
2026-02-18 06:29:57.307836 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:29:57.307847 | orchestrator |
2026-02-18 06:29:57.307858 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-18 06:29:57.307868 | orchestrator | Wednesday 18 February 2026 06:29:49 +0000 (0:00:01.823) 0:38:37.959 ****
2026-02-18 06:29:57.307879 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:29:57.307890 | orchestrator |
2026-02-18 06:29:57.307901 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-18 06:29:57.307912 | orchestrator | Wednesday 18 February 2026 06:29:50 +0000 (0:00:01.129) 0:38:39.089 ****
2026-02-18 06:29:57.307923 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:29:57.307934 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:29:57.307945 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:29:57.307956 | orchestrator |
2026-02-18 06:29:57.307966 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-18 06:29:57.307977 | orchestrator | Wednesday 18 February 2026 06:29:51 +0000 (0:00:01.760) 0:38:40.849 ****
2026-02-18 06:29:57.307988 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3
2026-02-18 06:29:57.307999 | orchestrator |
2026-02-18 06:29:57.308010 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-18 06:29:57.308027 | orchestrator | Wednesday 18 February 2026 06:29:53 +0000 (0:00:01.458) 0:38:42.308 ****
2026-02-18 06:29:57.308038 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.308049 | orchestrator |
2026-02-18 06:29:57.308060 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-18 06:29:57.308077 | orchestrator | Wednesday 18 February 2026 06:29:54 +0000 (0:00:01.158) 0:38:43.466 ****
2026-02-18 06:29:57.308088 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:29:57.308099 | orchestrator |
2026-02-18 06:29:57.308110 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-18 06:29:57.308121 | orchestrator | Wednesday 18 February 2026 06:29:55 +0000 (0:00:01.201) 0:38:44.668 ****
2026-02-18 06:29:57.308131 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:29:57.308142 | orchestrator |
2026-02-18 06:29:57.308159 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-18 06:31:02.929982 | orchestrator | Wednesday 18 February 2026 06:29:57 +0000 (0:00:01.496) 0:38:46.164 ****
2026-02-18 06:31:02.930222 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:31:02.930251 | orchestrator |
2026-02-18 06:31:02.930268 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-18 06:31:02.930285 | orchestrator | Wednesday 18 February 2026 06:29:58 +0000 (0:00:01.204) 0:38:47.369 ****
2026-02-18 06:31:02.930301 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-18 06:31:02.930317 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-18 06:31:02.930333 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-18 06:31:02.930348 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-18 06:31:02.930362 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-18 06:31:02.930378 | orchestrator |
2026-02-18 06:31:02.930393 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-18 06:31:02.930408 | orchestrator | Wednesday 18 February 2026 06:30:01 +0000 (0:00:02.983) 0:38:50.352 ****
2026-02-18 06:31:02.930423 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:31:02.930434 | orchestrator |
2026-02-18 06:31:02.930442 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-18 06:31:02.930451 | orchestrator | Wednesday 18 February 2026 06:30:02 +0000 (0:00:01.198) 0:38:51.551 ****
2026-02-18 06:31:02.930460 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3
2026-02-18 06:31:02.930469 | orchestrator |
2026-02-18 06:31:02.930477 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-18 06:31:02.930486 | orchestrator | Wednesday 18 February 2026 06:30:04 +0000 (0:00:01.448) 0:38:52.999 ****
2026-02-18 06:31:02.930495 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-18 06:31:02.930503 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-18 06:31:02.930512 | orchestrator |
2026-02-18 06:31:02.930520 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-18 06:31:02.930531 | orchestrator | Wednesday 18 February 2026 06:30:05 +0000 (0:00:01.827) 0:38:54.827 ****
2026-02-18 06:31:02.930541 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 06:31:02.930551 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-18 06:31:02.930561 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-18 06:31:02.930571 | orchestrator |
2026-02-18 06:31:02.930581 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-18 06:31:02.930591 | orchestrator | Wednesday 18 February 2026 06:30:09 +0000 (0:00:03.277) 0:38:58.104 ****
2026-02-18 06:31:02.930601 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-02-18 06:31:02.930612 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-18 06:31:02.930622 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:31:02.930632 | orchestrator |
2026-02-18 06:31:02.930643 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-18 06:31:02.930653 | orchestrator | Wednesday 18 February 2026 06:30:11 +0000 (0:00:01.976) 0:39:00.081 ****
2026-02-18 06:31:02.930663 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:31:02.930699 | orchestrator |
2026-02-18 06:31:02.930710 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-18 06:31:02.930720 | orchestrator | Wednesday 18 February 2026 06:30:12 +0000 (0:00:01.230) 0:39:01.311 ****
2026-02-18 06:31:02.930730 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:31:02.930740 | orchestrator |
2026-02-18 06:31:02.930750 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-18 06:31:02.930760 | orchestrator | Wednesday 18 February 2026 06:30:13 +0000 (0:00:01.191) 0:39:02.503 ****
2026-02-18 06:31:02.930770 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:31:02.930780 | orchestrator |
2026-02-18 06:31:02.930790 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-18 06:31:02.930800 | orchestrator | Wednesday 18 February 2026 06:30:14 +0000 (0:00:01.150) 0:39:03.653 ****
2026-02-18 06:31:02.930811 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3
2026-02-18 06:31:02.930821 | orchestrator |
2026-02-18 06:31:02.930831 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-18 06:31:02.930841 | orchestrator | Wednesday 18 February 2026 06:30:16 +0000 (0:00:01.495) 0:39:05.148 ****
2026-02-18 06:31:02.930851 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:31:02.930861 | orchestrator |
2026-02-18 06:31:02.930871 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-18 06:31:02.930881 | orchestrator | Wednesday 18 February 2026 06:30:17 +0000 (0:00:01.569) 0:39:06.718 ****
2026-02-18 06:31:02.930890 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:31:02.930899 | orchestrator |
2026-02-18 06:31:02.930908 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-18 06:31:02.930916 | orchestrator | Wednesday 18 February 2026 06:30:21 +0000 (0:00:03.439) 0:39:10.158 ****
2026-02-18 06:31:02.930938 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3
2026-02-18 06:31:02.930947 | orchestrator |
2026-02-18 06:31:02.930956 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-18 06:31:02.930964 | orchestrator | Wednesday 18 February 2026 06:30:22 +0000 (0:00:01.519) 0:39:11.677 ****
2026-02-18 06:31:02.930973 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:31:02.930982 | orchestrator |
2026-02-18 06:31:02.930991 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-18 06:31:02.931000 | orchestrator | Wednesday 18 February 2026 06:30:24 +0000 (0:00:02.012) 0:39:13.690 ****
2026-02-18 06:31:02.931008 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:31:02.931017 | orchestrator |
2026-02-18 06:31:02.931025 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-18 06:31:02.931051 | orchestrator | Wednesday 18 February 2026 06:30:26 +0000 (0:00:01.971) 0:39:15.661 ****
2026-02-18 06:31:02.931060 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:31:02.931069 | orchestrator |
2026-02-18 06:31:02.931078 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-18 06:31:02.931086 | orchestrator | Wednesday 18 February 2026 06:30:29 +0000 (0:00:02.249) 0:39:17.911 ****
2026-02-18 06:31:02.931095 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:31:02.931104 | orchestrator | 2026-02-18 06:31:02.931113 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-18 06:31:02.931121 | orchestrator | Wednesday 18 February 2026 06:30:30 +0000 (0:00:01.194) 0:39:19.106 **** 2026-02-18 06:31:02.931185 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:31:02.931195 | orchestrator | 2026-02-18 06:31:02.931204 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-18 06:31:02.931212 | orchestrator | Wednesday 18 February 2026 06:30:31 +0000 (0:00:01.132) 0:39:20.238 **** 2026-02-18 06:31:02.931221 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-02-18 06:31:02.931230 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-18 06:31:02.931238 | orchestrator | 2026-02-18 06:31:02.931247 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-18 06:31:02.931265 | orchestrator | Wednesday 18 February 2026 06:30:33 +0000 (0:00:01.828) 0:39:22.066 **** 2026-02-18 06:31:02.931273 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-02-18 06:31:02.931282 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-18 06:31:02.931291 | orchestrator | 2026-02-18 06:31:02.931299 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-18 06:31:02.931308 | orchestrator | Wednesday 18 February 2026 06:30:36 +0000 (0:00:02.862) 0:39:24.928 **** 2026-02-18 06:31:02.931316 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-02-18 06:31:02.931325 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-18 06:31:02.931334 | orchestrator | 2026-02-18 06:31:02.931342 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-18 06:31:02.931351 | orchestrator | Wednesday 18 February 2026 06:30:40 +0000 (0:00:04.591) 0:39:29.519 **** 2026-02-18 06:31:02.931359 | orchestrator 
| skipping: [testbed-node-3] 2026-02-18 06:31:02.931368 | orchestrator | 2026-02-18 06:31:02.931377 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-18 06:31:02.931385 | orchestrator | Wednesday 18 February 2026 06:30:41 +0000 (0:00:01.308) 0:39:30.828 **** 2026-02-18 06:31:02.931394 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:31:02.931402 | orchestrator | 2026-02-18 06:31:02.931411 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-18 06:31:02.931420 | orchestrator | Wednesday 18 February 2026 06:30:43 +0000 (0:00:01.753) 0:39:32.582 **** 2026-02-18 06:31:02.931428 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:31:02.931437 | orchestrator | 2026-02-18 06:31:02.931445 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-18 06:31:02.931454 | orchestrator | Wednesday 18 February 2026 06:30:44 +0000 (0:00:01.283) 0:39:33.865 **** 2026-02-18 06:31:02.931463 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:31:02.931471 | orchestrator | 2026-02-18 06:31:02.931480 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-18 06:31:02.931489 | orchestrator | Wednesday 18 February 2026 06:30:46 +0000 (0:00:01.233) 0:39:35.099 **** 2026-02-18 06:31:02.931497 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:31:02.931506 | orchestrator | 2026-02-18 06:31:02.931515 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-18 06:31:02.931523 | orchestrator | Wednesday 18 February 2026 06:30:47 +0000 (0:00:01.136) 0:39:36.236 **** 2026-02-18 06:31:02.931532 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-18 06:31:02.931541 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-18 06:31:02.931550 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:31:02.931558 | orchestrator | 2026-02-18 06:31:02.931567 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-18 06:31:02.931575 | orchestrator | 2026-02-18 06:31:02.931584 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 06:31:02.931593 | orchestrator | Wednesday 18 February 2026 06:30:55 +0000 (0:00:07.987) 0:39:44.223 **** 2026-02-18 06:31:02.931601 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-18 06:31:02.931610 | orchestrator | 2026-02-18 06:31:02.931618 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 06:31:02.931626 | orchestrator | Wednesday 18 February 2026 06:30:56 +0000 (0:00:01.141) 0:39:45.364 **** 2026-02-18 06:31:02.931635 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:02.931644 | orchestrator | 2026-02-18 06:31:02.931652 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-18 06:31:02.931661 | orchestrator | Wednesday 18 February 2026 06:30:57 +0000 (0:00:01.452) 0:39:46.817 **** 2026-02-18 06:31:02.931669 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:02.931678 | orchestrator | 2026-02-18 06:31:02.931687 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 06:31:02.931706 | orchestrator | Wednesday 18 February 2026 06:30:59 +0000 (0:00:01.186) 0:39:48.004 **** 2026-02-18 06:31:02.931715 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:02.931723 | orchestrator | 2026-02-18 06:31:02.931732 | orchestrator | TASK [ceph-facts : Set_fact container_binary] 
********************************** 2026-02-18 06:31:02.931741 | orchestrator | Wednesday 18 February 2026 06:31:00 +0000 (0:00:01.424) 0:39:49.428 **** 2026-02-18 06:31:02.931749 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:02.931758 | orchestrator | 2026-02-18 06:31:02.931766 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-18 06:31:02.931775 | orchestrator | Wednesday 18 February 2026 06:31:01 +0000 (0:00:01.163) 0:39:50.592 **** 2026-02-18 06:31:02.931783 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:02.931792 | orchestrator | 2026-02-18 06:31:02.931807 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-18 06:31:27.920807 | orchestrator | Wednesday 18 February 2026 06:31:02 +0000 (0:00:01.199) 0:39:51.792 **** 2026-02-18 06:31:27.920920 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:27.920936 | orchestrator | 2026-02-18 06:31:27.920949 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-18 06:31:27.920962 | orchestrator | Wednesday 18 February 2026 06:31:04 +0000 (0:00:01.201) 0:39:52.994 **** 2026-02-18 06:31:27.920973 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:27.920985 | orchestrator | 2026-02-18 06:31:27.920996 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-18 06:31:27.921008 | orchestrator | Wednesday 18 February 2026 06:31:05 +0000 (0:00:01.171) 0:39:54.166 **** 2026-02-18 06:31:27.921018 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:27.921029 | orchestrator | 2026-02-18 06:31:27.921041 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-18 06:31:27.921052 | orchestrator | Wednesday 18 February 2026 06:31:06 +0000 (0:00:01.173) 0:39:55.340 **** 2026-02-18 06:31:27.921063 | orchestrator | ok: [testbed-node-4 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:31:27.921074 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:31:27.921084 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:31:27.921095 | orchestrator | 2026-02-18 06:31:27.921144 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-18 06:31:27.921156 | orchestrator | Wednesday 18 February 2026 06:31:08 +0000 (0:00:01.695) 0:39:57.035 **** 2026-02-18 06:31:27.921167 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:27.921178 | orchestrator | 2026-02-18 06:31:27.921189 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-18 06:31:27.921200 | orchestrator | Wednesday 18 February 2026 06:31:09 +0000 (0:00:01.284) 0:39:58.320 **** 2026-02-18 06:31:27.921211 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:31:27.921221 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:31:27.921232 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:31:27.921243 | orchestrator | 2026-02-18 06:31:27.921254 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-18 06:31:27.921265 | orchestrator | Wednesday 18 February 2026 06:31:12 +0000 (0:00:03.028) 0:40:01.348 **** 2026-02-18 06:31:27.921276 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-18 06:31:27.921287 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-18 06:31:27.921298 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-18 06:31:27.921309 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:27.921319 | orchestrator | 
2026-02-18 06:31:27.921331 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-18 06:31:27.921344 | orchestrator | Wednesday 18 February 2026 06:31:13 +0000 (0:00:01.469) 0:40:02.818 **** 2026-02-18 06:31:27.921382 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 06:31:27.921398 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-18 06:31:27.921411 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 06:31:27.921424 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:27.921436 | orchestrator | 2026-02-18 06:31:27.921449 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-18 06:31:27.921461 | orchestrator | Wednesday 18 February 2026 06:31:16 +0000 (0:00:02.180) 0:40:04.999 **** 2026-02-18 06:31:27.921476 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:27.921510 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:27.921542 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:27.921555 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:27.921567 | orchestrator | 2026-02-18 06:31:27.921580 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-18 06:31:27.921592 | orchestrator | Wednesday 18 February 2026 06:31:17 +0000 (0:00:01.175) 0:40:06.174 **** 2026-02-18 06:31:27.921606 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:31:10.004383', 'end': '2026-02-18 06:31:10.050286', 'delta': '0:00:00.045903', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-18 06:31:27.921623 | orchestrator | ok: 
[testbed-node-4] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:31:10.629702', 'end': '2026-02-18 06:31:10.680569', 'delta': '0:00:00.050867', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-18 06:31:27.921645 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:31:11.179311', 'end': '2026-02-18 06:31:11.233178', 'delta': '0:00:00.053867', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-18 06:31:27.921657 | orchestrator | 2026-02-18 06:31:27.921670 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-18 06:31:27.921682 | orchestrator | Wednesday 18 February 2026 06:31:18 +0000 (0:00:01.249) 0:40:07.424 **** 2026-02-18 06:31:27.921693 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:27.921704 | orchestrator | 2026-02-18 06:31:27.921715 | orchestrator | TASK [ceph-facts : Get 
current fsid if cluster is already running] ************* 2026-02-18 06:31:27.921725 | orchestrator | Wednesday 18 February 2026 06:31:19 +0000 (0:00:01.264) 0:40:08.688 **** 2026-02-18 06:31:27.921736 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:27.921747 | orchestrator | 2026-02-18 06:31:27.921758 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-18 06:31:27.921769 | orchestrator | Wednesday 18 February 2026 06:31:21 +0000 (0:00:01.273) 0:40:09.962 **** 2026-02-18 06:31:27.921780 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:27.921791 | orchestrator | 2026-02-18 06:31:27.921801 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-18 06:31:27.921812 | orchestrator | Wednesday 18 February 2026 06:31:22 +0000 (0:00:01.199) 0:40:11.161 **** 2026-02-18 06:31:27.921823 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:31:27.921834 | orchestrator | 2026-02-18 06:31:27.921851 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:31:27.921862 | orchestrator | Wednesday 18 February 2026 06:31:24 +0000 (0:00:02.016) 0:40:13.178 **** 2026-02-18 06:31:27.921873 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:27.921883 | orchestrator | 2026-02-18 06:31:27.921894 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 06:31:27.921905 | orchestrator | Wednesday 18 February 2026 06:31:25 +0000 (0:00:01.183) 0:40:14.361 **** 2026-02-18 06:31:27.921916 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:27.921927 | orchestrator | 2026-02-18 06:31:27.921937 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 06:31:27.921964 | orchestrator | Wednesday 18 February 2026 06:31:26 +0000 (0:00:01.140) 0:40:15.501 **** 2026-02-18 
06:31:27.921982 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:36.589399 | orchestrator | 2026-02-18 06:31:36.589503 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:31:36.589519 | orchestrator | Wednesday 18 February 2026 06:31:27 +0000 (0:00:01.284) 0:40:16.786 **** 2026-02-18 06:31:36.589531 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:36.589543 | orchestrator | 2026-02-18 06:31:36.589553 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 06:31:36.589563 | orchestrator | Wednesday 18 February 2026 06:31:29 +0000 (0:00:01.144) 0:40:17.931 **** 2026-02-18 06:31:36.589574 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:36.589583 | orchestrator | 2026-02-18 06:31:36.589593 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 06:31:36.589624 | orchestrator | Wednesday 18 February 2026 06:31:30 +0000 (0:00:01.211) 0:40:19.142 **** 2026-02-18 06:31:36.589635 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:36.589645 | orchestrator | 2026-02-18 06:31:36.589655 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-18 06:31:36.589665 | orchestrator | Wednesday 18 February 2026 06:31:31 +0000 (0:00:01.198) 0:40:20.341 **** 2026-02-18 06:31:36.589675 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:36.589685 | orchestrator | 2026-02-18 06:31:36.589695 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 06:31:36.589705 | orchestrator | Wednesday 18 February 2026 06:31:32 +0000 (0:00:01.161) 0:40:21.503 **** 2026-02-18 06:31:36.589715 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:36.589724 | orchestrator | 2026-02-18 06:31:36.589734 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] 
*********************** 2026-02-18 06:31:36.589744 | orchestrator | Wednesday 18 February 2026 06:31:33 +0000 (0:00:01.200) 0:40:22.704 **** 2026-02-18 06:31:36.589754 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:36.589763 | orchestrator | 2026-02-18 06:31:36.589773 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 06:31:36.589784 | orchestrator | Wednesday 18 February 2026 06:31:35 +0000 (0:00:01.206) 0:40:23.910 **** 2026-02-18 06:31:36.589794 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:31:36.589803 | orchestrator | 2026-02-18 06:31:36.589813 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-18 06:31:36.589823 | orchestrator | Wednesday 18 February 2026 06:31:36 +0000 (0:00:01.227) 0:40:25.138 **** 2026-02-18 06:31:36.589835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:31:36.589848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'uuids': ['979a0cee-d595-4490-b8ce-61c0ee691ca0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 
'host': '', 'holders': ['R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17']}})  2026-02-18 06:31:36.589862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4d92644', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 06:31:36.589903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906']}})  2026-02-18 06:31:36.589927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:31:36.589938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:31:36.589951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 06:31:36.589963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:31:36.589975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF', 'dm-uuid-CRYPT-LUKS2-618550ddd31f436ab0c76e785ef9ce84-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 06:31:36.589987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:31:36.590004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'uuids': ['618550dd-d31f-436a-b0c7-6e785ef9ce84'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF']}})  2026-02-18 06:31:36.590117 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1']}})  2026-02-18 06:31:37.978987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:31:37.979085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f33eab1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-18 06:31:37.979176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:31:37.979204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:31:37.979235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17', 'dm-uuid-CRYPT-LUKS2-979a0ceed5954490b8ce61c0ee691ca0-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 06:31:37.979246 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:31:37.979256 | orchestrator | 2026-02-18 06:31:37.979279 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 06:31:37.979291 | orchestrator | Wednesday 18 February 2026 06:31:37 +0000 (0:00:01.462) 0:40:26.600 **** 2026-02-18 06:31:37.979302 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:37.979312 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'uuids': ['979a0cee-d595-4490-b8ce-61c0ee691ca0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:37.979322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4d92644', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:37.979337 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:37.979352 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:37.979368 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:39.299606 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:39.299709 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:39.299723 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF', 'dm-uuid-CRYPT-LUKS2-618550ddd31f436ab0c76e785ef9ce84-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:39.299751 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:39.299782 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'uuids': ['618550dd-d31f-436a-b0c7-6e785ef9ce84'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:39.299811 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:39.299825 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:39.299842 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f33eab1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:39.299866 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:31:39.299883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:32:15.187441 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17', 'dm-uuid-CRYPT-LUKS2-979a0ceed5954490b8ce61c0ee691ca0-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:32:15.187516 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:32:15.187525 | orchestrator | 2026-02-18 06:32:15.187531 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:32:15.187536 | orchestrator | Wednesday 18 February 2026 06:31:39 +0000 (0:00:01.565) 0:40:28.166 **** 2026-02-18 06:32:15.187539 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:32:15.187544 | orchestrator | 2026-02-18 06:32:15.187548 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:32:15.187552 | orchestrator | Wednesday 18 February 2026 06:31:40 +0000 (0:00:01.493) 0:40:29.660 **** 2026-02-18 06:32:15.187570 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:32:15.187574 | orchestrator | 2026-02-18 06:32:15.187578 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:32:15.187581 | orchestrator | Wednesday 18 February 2026 06:31:41 +0000 (0:00:01.203) 0:40:30.863 **** 2026-02-18 06:32:15.187585 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:32:15.187589 | orchestrator | 2026-02-18 06:32:15.187592 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:32:15.187596 | orchestrator | Wednesday 18 February 2026 06:31:43 +0000 (0:00:01.456) 0:40:32.320 **** 2026-02-18 06:32:15.187600 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:32:15.187604 | orchestrator | 2026-02-18 06:32:15.187608 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:32:15.187612 | orchestrator | Wednesday 18 February 2026 06:31:44 +0000 (0:00:01.243) 0:40:33.563 **** 2026-02-18 06:32:15.187615 | orchestrator | skipping: [testbed-node-4] 2026-02-18 
06:32:15.187619 | orchestrator | 2026-02-18 06:32:15.187623 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:32:15.187627 | orchestrator | Wednesday 18 February 2026 06:31:45 +0000 (0:00:01.304) 0:40:34.868 **** 2026-02-18 06:32:15.187631 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:32:15.187634 | orchestrator | 2026-02-18 06:32:15.187646 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:32:15.187650 | orchestrator | Wednesday 18 February 2026 06:31:47 +0000 (0:00:01.165) 0:40:36.033 **** 2026-02-18 06:32:15.187654 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-18 06:32:15.187658 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-18 06:32:15.187662 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-18 06:32:15.187665 | orchestrator | 2026-02-18 06:32:15.187669 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:32:15.187673 | orchestrator | Wednesday 18 February 2026 06:31:48 +0000 (0:00:01.695) 0:40:37.729 **** 2026-02-18 06:32:15.187677 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-18 06:32:15.187681 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-18 06:32:15.187684 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-18 06:32:15.187688 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:32:15.187692 | orchestrator | 2026-02-18 06:32:15.187695 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:32:15.187699 | orchestrator | Wednesday 18 February 2026 06:31:50 +0000 (0:00:01.246) 0:40:38.976 **** 2026-02-18 06:32:15.187703 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-18 06:32:15.187707 | 
orchestrator | 2026-02-18 06:32:15.187712 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:32:15.187717 | orchestrator | Wednesday 18 February 2026 06:31:51 +0000 (0:00:01.148) 0:40:40.125 **** 2026-02-18 06:32:15.187720 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:32:15.187724 | orchestrator | 2026-02-18 06:32:15.187728 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:32:15.187731 | orchestrator | Wednesday 18 February 2026 06:31:52 +0000 (0:00:01.135) 0:40:41.260 **** 2026-02-18 06:32:15.187735 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:32:15.187739 | orchestrator | 2026-02-18 06:32:15.187743 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:32:15.187746 | orchestrator | Wednesday 18 February 2026 06:31:53 +0000 (0:00:01.141) 0:40:42.402 **** 2026-02-18 06:32:15.187750 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:32:15.187754 | orchestrator | 2026-02-18 06:32:15.187758 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:32:15.187761 | orchestrator | Wednesday 18 February 2026 06:31:54 +0000 (0:00:01.198) 0:40:43.600 **** 2026-02-18 06:32:15.187769 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:32:15.187773 | orchestrator | 2026-02-18 06:32:15.187777 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:32:15.187780 | orchestrator | Wednesday 18 February 2026 06:31:55 +0000 (0:00:01.252) 0:40:44.853 **** 2026-02-18 06:32:15.187784 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-18 06:32:15.187798 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-18 06:32:15.187802 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-02-18 06:32:15.187806 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:32:15.187810 | orchestrator | 2026-02-18 06:32:15.187813 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:32:15.187817 | orchestrator | Wednesday 18 February 2026 06:31:57 +0000 (0:00:01.436) 0:40:46.289 **** 2026-02-18 06:32:15.187821 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-18 06:32:15.187825 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-18 06:32:15.187828 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-18 06:32:15.187832 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:32:15.187836 | orchestrator | 2026-02-18 06:32:15.187839 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:32:15.187843 | orchestrator | Wednesday 18 February 2026 06:31:58 +0000 (0:00:01.415) 0:40:47.705 **** 2026-02-18 06:32:15.187847 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-18 06:32:15.187851 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-18 06:32:15.187854 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-18 06:32:15.187858 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:32:15.187862 | orchestrator | 2026-02-18 06:32:15.187865 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:32:15.187869 | orchestrator | Wednesday 18 February 2026 06:32:00 +0000 (0:00:01.496) 0:40:49.201 **** 2026-02-18 06:32:15.187873 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:32:15.187876 | orchestrator | 2026-02-18 06:32:15.187880 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:32:15.187884 | orchestrator | Wednesday 18 February 2026 06:32:01 +0000 
(0:00:01.150) 0:40:50.352 **** 2026-02-18 06:32:15.187887 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-18 06:32:15.187891 | orchestrator | 2026-02-18 06:32:15.187895 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 06:32:15.187899 | orchestrator | Wednesday 18 February 2026 06:32:02 +0000 (0:00:01.333) 0:40:51.686 **** 2026-02-18 06:32:15.187902 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:32:15.187906 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:32:15.187910 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:32:15.187913 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:32:15.187917 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-18 06:32:15.187921 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:32:15.187927 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:32:15.187931 | orchestrator | 2026-02-18 06:32:15.187934 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 06:32:15.187938 | orchestrator | Wednesday 18 February 2026 06:32:04 +0000 (0:00:02.160) 0:40:53.847 **** 2026-02-18 06:32:15.187942 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:32:15.187945 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:32:15.187949 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:32:15.187956 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-18 06:32:15.187960 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-18 06:32:15.187964 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:32:15.187968 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:32:15.187971 | orchestrator | 2026-02-18 06:32:15.187975 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-18 06:32:15.187979 | orchestrator | Wednesday 18 February 2026 06:32:07 +0000 (0:00:02.296) 0:40:56.143 **** 2026-02-18 06:32:15.187982 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:32:15.187986 | orchestrator | 2026-02-18 06:32:15.187990 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-18 06:32:15.187994 | orchestrator | Wednesday 18 February 2026 06:32:08 +0000 (0:00:01.121) 0:40:57.265 **** 2026-02-18 06:32:15.187997 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:32:15.188001 | orchestrator | 2026-02-18 06:32:15.188005 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-18 06:32:15.188008 | orchestrator | Wednesday 18 February 2026 06:32:09 +0000 (0:00:00.796) 0:40:58.061 **** 2026-02-18 06:32:15.188012 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:32:15.188016 | orchestrator | 2026-02-18 06:32:15.188019 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-18 06:32:15.188023 | orchestrator | Wednesday 18 February 2026 06:32:10 +0000 (0:00:00.921) 0:40:58.982 **** 2026-02-18 06:32:15.188027 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-18 06:32:15.188030 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-18 06:32:15.188034 | orchestrator | 2026-02-18 06:32:15.188038 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-02-18 06:32:15.188041 | orchestrator | Wednesday 18 February 2026 06:32:13 +0000 (0:00:03.751) 0:41:02.734 **** 2026-02-18 06:32:15.188045 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-18 06:32:15.188049 | orchestrator | 2026-02-18 06:32:15.188081 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 06:32:15.188089 | orchestrator | Wednesday 18 February 2026 06:32:15 +0000 (0:00:01.316) 0:41:04.051 **** 2026-02-18 06:32:58.015913 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-18 06:32:58.016077 | orchestrator | 2026-02-18 06:32:58.016099 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 06:32:58.016114 | orchestrator | Wednesday 18 February 2026 06:32:16 +0000 (0:00:01.167) 0:41:05.218 **** 2026-02-18 06:32:58.016126 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:32:58.016138 | orchestrator | 2026-02-18 06:32:58.016150 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 06:32:58.016162 | orchestrator | Wednesday 18 February 2026 06:32:17 +0000 (0:00:01.137) 0:41:06.356 **** 2026-02-18 06:32:58.016177 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:32:58.016198 | orchestrator | 2026-02-18 06:32:58.016217 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-18 06:32:58.016236 | orchestrator | Wednesday 18 February 2026 06:32:19 +0000 (0:00:01.536) 0:41:07.893 **** 2026-02-18 06:32:58.016255 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:32:58.016273 | orchestrator | 2026-02-18 06:32:58.016291 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-18 06:32:58.016311 | orchestrator | 
Wednesday 18 February 2026 06:32:20 +0000 (0:00:01.575) 0:41:09.468 ****
ok: [testbed-node-4]

TASK [ceph-handler : Check for a mgr container] ********************************
Wednesday 18 February 2026 06:32:22 +0000 (0:00:01.533) 0:41:11.002 ****
skipping: [testbed-node-4]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Wednesday 18 February 2026 06:32:23 +0000 (0:00:01.116) 0:41:12.118 ****
skipping: [testbed-node-4]

TASK [ceph-handler : Check for a nfs container] ********************************
Wednesday 18 February 2026 06:32:24 +0000 (0:00:01.184) 0:41:13.302 ****
skipping: [testbed-node-4]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Wednesday 18 February 2026 06:32:25 +0000 (0:00:01.149) 0:41:14.452 ****
ok: [testbed-node-4]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Wednesday 18 February 2026 06:32:27 +0000 (0:00:01.553) 0:41:16.006 ****
ok: [testbed-node-4]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Wednesday 18 February 2026 06:32:28 +0000 (0:00:01.521) 0:41:17.528 ****
skipping: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Wednesday 18 February 2026 06:32:29 +0000 (0:00:00.868) 0:41:18.396 ****
skipping: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Wednesday 18 February 2026 06:32:30 +0000 (0:00:00.812) 0:41:19.209 ****
ok: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Wednesday 18 February 2026 06:32:31 +0000 (0:00:00.845) 0:41:20.055 ****
ok: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Wednesday 18 February 2026 06:32:31 +0000 (0:00:00.806) 0:41:20.861 ****
ok: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Wednesday 18 February 2026 06:32:32 +0000 (0:00:00.780) 0:41:21.642 ****
skipping: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Wednesday 18 February 2026 06:32:33 +0000 (0:00:00.789) 0:41:22.431 ****
skipping: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Wednesday 18 February 2026 06:32:34 +0000 (0:00:00.854) 0:41:23.286 ****
skipping: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Wednesday 18 February 2026 06:32:35 +0000 (0:00:00.807) 0:41:24.094 ****
ok: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Wednesday 18 February 2026 06:32:36 +0000 (0:00:00.797) 0:41:24.892 ****
ok: [testbed-node-4]

TASK [ceph-common : Include configure_repository.yml] **************************
Wednesday 18 February 2026 06:32:36 +0000 (0:00:00.798) 0:41:25.690 ****
skipping: [testbed-node-4]

TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
Wednesday 18 February 2026 06:32:37 +0000 (0:00:00.772) 0:41:26.462 ****
skipping: [testbed-node-4]

TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
Wednesday 18 February 2026 06:32:38 +0000 (0:00:00.786) 0:41:27.248 ****
skipping: [testbed-node-4]

TASK [ceph-common : Include installs/install_on_debian.yml] ********************
Wednesday 18 February 2026 06:32:39 +0000 (0:00:00.834) 0:41:28.083 ****
skipping: [testbed-node-4]

TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
Wednesday 18 February 2026 06:32:39 +0000 (0:00:00.785) 0:41:28.869 ****
skipping: [testbed-node-4]

TASK [ceph-common : Get ceph version] ******************************************
Wednesday 18 February 2026 06:32:40 +0000 (0:00:00.796) 0:41:29.665 ****
skipping: [testbed-node-4]

TASK [ceph-common : Set_fact ceph_version] *************************************
Wednesday 18 February 2026 06:32:41 +0000 (0:00:00.751) 0:41:30.417 ****
skipping: [testbed-node-4]

TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
Wednesday 18 February 2026 06:32:42 +0000 (0:00:00.801) 0:41:31.219 ****
skipping: [testbed-node-4]

TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
Wednesday 18 February 2026 06:32:43 +0000 (0:00:00.788) 0:41:32.008 ****
skipping: [testbed-node-4]

TASK [ceph-common : Include configure_cluster_name.yml] ************************
Wednesday 18 February 2026 06:32:43 +0000 (0:00:00.786) 0:41:32.795 ****
skipping: [testbed-node-4]

TASK [ceph-common : Include configure_memory_allocator.yml] ********************
Wednesday 18 February 2026 06:32:44 +0000 (0:00:00.835) 0:41:33.631 ****
skipping: [testbed-node-4]

TASK [ceph-common : Include selinux.yml] ***************************************
Wednesday 18 February 2026 06:32:45 +0000 (0:00:00.904) 0:41:34.535 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Generate systemd ceph target file] ***************
Wednesday 18 February 2026 06:32:46 +0000 (0:00:00.832) 0:41:35.368 ****
ok: [testbed-node-4]

TASK [ceph-container-common : Enable ceph.target] ******************************
Wednesday 18 February 2026 06:32:48 +0000 (0:00:01.597) 0:41:36.966 ****
ok: [testbed-node-4]

TASK [ceph-container-common : Include prerequisites.yml] ***********************
Wednesday 18 February 2026 06:32:49 +0000 (0:00:01.872) 0:41:38.838 ****
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4

TASK [ceph-container-common : Stop lvmetad] ************************************
Wednesday 18 February 2026 06:32:51 +0000 (0:00:01.259) 0:41:40.098 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Disable and mask lvmetad service] ****************
Wednesday 18 February 2026 06:32:52 +0000 (0:00:01.135) 0:41:41.233 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Remove ceph udev rules] **************************
Wednesday 18 February 2026 06:32:53 +0000 (0:00:01.154) 0:41:42.388 ****
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Wednesday 18 February 2026 06:32:55 +0000 (0:00:01.805) 0:41:44.193 ****
ok: [testbed-node-4]

TASK [ceph-container-common : Restore certificates selinux context] ************
Wednesday 18 February 2026 06:32:56 +0000 (0:00:01.479) 0:41:45.673 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Wednesday 18 February 2026 06:32:58 +0000 (0:00:01.207) 0:41:46.881 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Include registry.yml] ****************************
Wednesday 18 February 2026 06:32:58 +0000 (0:00:00.825) 0:41:47.706 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Include fetch_image.yml] *************************
Wednesday 18 February 2026 06:32:59 +0000 (0:00:00.835) 0:41:48.542 ****
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4

TASK [ceph-container-common : Pulling Ceph container image] ********************
Wednesday 18 February 2026 06:33:00 +0000 (0:00:01.155) 0:41:49.697 ****
ok: [testbed-node-4]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Wednesday 18 February 2026 06:33:02 +0000 (0:00:01.787) 0:41:51.485 ****
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Wednesday 18 February 2026 06:33:03 +0000 (0:00:01.191) 0:41:52.677 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Export local ceph dev image] *********************
Wednesday 18 February 2026 06:33:04 +0000 (0:00:01.162) 0:41:53.840 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Wednesday 18 February 2026 06:33:06 +0000 (0:00:01.579) 0:41:55.419 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Load ceph dev image] *****************************
Wednesday 18 February 2026 06:33:07 +0000 (0:00:01.137) 0:41:56.557 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Wednesday 18 February 2026 06:33:08 +0000 (0:00:01.145) 0:41:57.702 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Get ceph version] ********************************
Wednesday 18 February 2026 06:33:09 +0000 (0:00:00.927) 0:41:58.630 ****
ok: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Wednesday 18 February 2026 06:33:11 +0000 (0:00:02.120) 0:42:00.750 ****
ok: [testbed-node-4]

TASK [ceph-container-common : Include release.yml] *****************************
Wednesday 18 February 2026 06:33:12 +0000 (0:00:00.794) 0:42:01.546 ****
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Wednesday 18 February 2026 06:33:13 +0000 (0:00:01.155) 0:42:02.701 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Wednesday 18 February 2026 06:33:15 +0000 (0:00:01.381) 0:42:04.082 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Wednesday 18 February 2026 06:33:16 +0000 (0:00:01.188) 0:42:05.271 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Wednesday 18 February 2026 06:33:17 +0000 (0:00:01.155) 0:42:06.427 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Wednesday 18 February 2026 06:33:18 +0000 (0:00:01.193) 0:42:07.621 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Wednesday 18 February 2026 06:33:19 +0000 (0:00:01.161) 0:42:08.782 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Wednesday 18 February 2026 06:33:21 +0000 (0:00:01.239) 0:42:10.022 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Wednesday 18 February 2026 06:33:22 +0000 (0:00:01.174) 0:42:11.196 ****
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Wednesday 18 February 2026 06:33:23 +0000 (0:00:01.155) 0:42:12.352 ****
ok: [testbed-node-4]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Wednesday 18 February 2026 06:33:24 +0000 (0:00:00.843) 0:42:13.196 ****
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4

TASK [ceph-config : Create ceph initial directories] ***************************
Wednesday 18 February 2026 06:33:25 +0000 (0:00:01.110) 0:42:14.306 ****
ok: [testbed-node-4] => (item=/etc/ceph)
ok: [testbed-node-4] => (item=/var/lib/ceph/)
ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
ok: [testbed-node-4] => (item=/var/run/ceph)
ok: [testbed-node-4] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Wednesday 18 February 2026 06:33:31 +0000 (0:00:06.210) 0:42:20.516 ****
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4

TASK [ceph-config : Create rados gateway instance directories] *****************
Wednesday 18 February 2026 06:33:32 +0000 (0:00:01.142) 0:42:21.659 ****
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Generate environment file] *********************************
Wednesday 18 February 2026 06:33:34 +0000 (0:00:01.514) 0:42:23.173 ****
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Reset num_osds] ********************************************
Wednesday 18 February 2026 06:33:35 +0000 (0:00:01.640) 0:42:24.814 ****
skipping: [testbed-node-4]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Wednesday 18 February 2026 06:33:36 +0000 (0:00:00.806) 0:42:25.621 ****
skipping: [testbed-node-4]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Wednesday 18 February 2026 06:33:37 +0000 (0:00:00.778) 0:42:26.400 ****
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Wednesday 18 February 2026 06:33:38 +0000 (0:00:00.780) 0:42:27.181 ****
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact _devices] *****************************************
Wednesday 18 February 2026 06:33:39 +0000 (0:00:00.893) 0:42:28.075 ****
skipping: [testbed-node-4]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Wednesday 18 February 2026 06:33:39 +0000 (0:00:00.796) 0:42:28.872 ****
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Wednesday 18 February 2026 06:33:40 +0000 (0:00:00.835) 0:42:29.707 ****
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Wednesday 18 February 2026 06:33:41 +0000 (0:00:00.787) 0:42:30.494 ****
skipping: [testbed-node-4]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Wednesday 18 February 2026 06:33:42 +0000 (0:00:00.782) 0:42:31.276 ****
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Wednesday 18 February 2026 06:33:43 +0000 (0:00:00.818) 0:42:32.095 ****
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Wednesday 18 February 2026 06:33:44 +0000 (0:00:00.826) 0:42:32.921 ****
ok: [testbed-node-4]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Wednesday 18 February 2026 06:33:44 +0000 (0:00:00.897) 0:42:33.819 ****
changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]

TASK [ceph-config : Render rgw configs] ****************************************
Wednesday 18 February 2026 06:33:48 +0000 (0:00:04.039) 0:42:37.859 ****
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Set config to cluster] *************************************
Wednesday 18 February 2026 06:33:49 +0000 (0:00:00.889) 0:42:38.748 ****
changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])

TASK [ceph-config : Set rgw configs to file] ***********************************
Wednesday 18 February 2026 06:33:57 +0000 (0:00:07.450) 0:42:46.198 ****
skipping: [testbed-node-4]

TASK [ceph-config : Create ceph conf directory] ********************************
Wednesday 18 February 2026 06:33:58 +0000 (0:00:00.834) 0:42:47.033 ****
skipping: [testbed-node-4]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Wednesday 18 February 2026 06:33:58 +0000 (0:00:00.784) 0:42:47.817 ****
skipping: [testbed-node-4]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Wednesday 18 February 2026 06:33:59 +0000 (0:00:00.826) 0:42:48.645 ****
skipping: [testbed-node-4]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Wednesday 18 February 2026 06:34:00 +0000 (0:00:00.879) 0:42:49.524 ****
skipping: [testbed-node-4]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Wednesday 18 February 2026 06:34:01 +0000 (0:00:00.859) 0:42:50.384 ****
ok: [testbed-node-4]

TASK [ceph-facts : Set_fact _interface] ****************************************
Wednesday 18 February 2026 06:34:02 +0000 (0:00:00.897) 0:42:51.282 ****
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-4]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Wednesday 18 February 2026 06:34:03 +0000 (0:00:01.067) 0:42:52.350 ****
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-4]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Wednesday 18 February 2026 06:34:04 +0000 (0:00:01.078) 0:42:53.428 ****
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-4]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Wednesday 18 February 2026 06:34:05 +0000 (0:00:01.111) 0:42:54.540 ****
ok: [testbed-node-4]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Wednesday 18 February 2026 06:34:06 +0000 (0:00:00.806) 0:42:55.347 ****
ok: [testbed-node-4] => (item=0)

TASK [ceph-config : Generate Ceph file] ****************************************
Wednesday 18 February 2026 06:34:07 +0000 (0:00:01.018) 0:42:56.365 ****
ok: [testbed-node-4]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Wednesday 18 February 2026 06:34:09 +0000 (0:00:01.593) 0:42:57.958 ****
ok: [testbed-node-4]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Wednesday 18 February 2026 06:34:09 +0000 (0:00:00.798) 0:42:58.757 ****
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Wednesday 18 February 2026 06:34:11 +0000 (0:00:01.366) 0:43:00.123 ****
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Wednesday 18 February 2026 06:34:12 +0000 (0:00:01.153) 0:43:01.276 ****
skipping: [testbed-node-4]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Wednesday 18 February 2026 06:34:13 +0000 (0:00:01.217) 0:43:02.493 ****
skipping: [testbed-node-4]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Wednesday 18 February 2026 06:34:14 +0000 (0:00:01.261) 0:43:03.755 ****
ok: [testbed-node-4]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Wednesday 18 February 2026 06:34:16 +0000 (0:00:01.522) 0:43:05.278 ****
ok: [testbed-node-4]

TASK [ceph-osd : Apply operating system tuning] ********************************
Wednesday 18 February 2026 06:34:17 +0000 (0:00:01.173) 0:43:06.452 ****
ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
Wednesday 18 February 2026 06:34:20 +0000 (0:00:02.536) 0:43:08.989 ****
06:34:22.081155 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:34:22.081171 | orchestrator | 2026-02-18 06:34:22.081218 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-18 06:34:22.081235 | orchestrator | Wednesday 18 February 2026 06:34:20 +0000 (0:00:00.778) 0:43:09.768 **** 2026-02-18 06:34:22.081251 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-02-18 06:34:22.081267 | orchestrator | 2026-02-18 06:34:22.081283 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-18 06:35:28.064632 | orchestrator | Wednesday 18 February 2026 06:34:22 +0000 (0:00:01.173) 0:43:10.941 **** 2026-02-18 06:35:28.064777 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-18 06:35:28.064805 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-18 06:35:28.064823 | orchestrator | 2026-02-18 06:35:28.064843 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-18 06:35:28.064862 | orchestrator | Wednesday 18 February 2026 06:34:23 +0000 (0:00:01.826) 0:43:12.768 **** 2026-02-18 06:35:28.064882 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 06:35:28.064900 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-18 06:35:28.064916 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 06:35:28.064965 | orchestrator | 2026-02-18 06:35:28.065005 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-18 06:35:28.065016 | orchestrator | Wednesday 18 February 2026 06:34:27 +0000 (0:00:03.479) 0:43:16.248 **** 2026-02-18 06:35:28.065028 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-18 06:35:28.065039 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-18 
06:35:28.065050 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:35:28.065061 | orchestrator | 2026-02-18 06:35:28.065072 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-18 06:35:28.065082 | orchestrator | Wednesday 18 February 2026 06:34:29 +0000 (0:00:01.664) 0:43:17.912 **** 2026-02-18 06:35:28.065093 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:35:28.065104 | orchestrator | 2026-02-18 06:35:28.065115 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-18 06:35:28.065126 | orchestrator | Wednesday 18 February 2026 06:34:29 +0000 (0:00:00.900) 0:43:18.812 **** 2026-02-18 06:35:28.065140 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:35:28.065152 | orchestrator | 2026-02-18 06:35:28.065165 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-18 06:35:28.065178 | orchestrator | Wednesday 18 February 2026 06:34:30 +0000 (0:00:00.778) 0:43:19.591 **** 2026-02-18 06:35:28.065191 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:35:28.065203 | orchestrator | 2026-02-18 06:35:28.065215 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-18 06:35:28.065228 | orchestrator | Wednesday 18 February 2026 06:34:31 +0000 (0:00:00.790) 0:43:20.382 **** 2026-02-18 06:35:28.065241 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-02-18 06:35:28.065253 | orchestrator | 2026-02-18 06:35:28.065264 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-18 06:35:28.065275 | orchestrator | Wednesday 18 February 2026 06:34:32 +0000 (0:00:01.109) 0:43:21.491 **** 2026-02-18 06:35:28.065300 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:35:28.065312 | orchestrator | 2026-02-18 06:35:28.065323 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-18 06:35:28.065334 | orchestrator | Wednesday 18 February 2026 06:34:34 +0000 (0:00:01.461) 0:43:22.952 **** 2026-02-18 06:35:28.065345 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:35:28.065355 | orchestrator | 2026-02-18 06:35:28.065366 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-18 06:35:28.065377 | orchestrator | Wednesday 18 February 2026 06:34:37 +0000 (0:00:03.413) 0:43:26.366 **** 2026-02-18 06:35:28.065388 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-02-18 06:35:28.065399 | orchestrator | 2026-02-18 06:35:28.065410 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-18 06:35:28.065420 | orchestrator | Wednesday 18 February 2026 06:34:38 +0000 (0:00:01.133) 0:43:27.499 **** 2026-02-18 06:35:28.065431 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:35:28.065442 | orchestrator | 2026-02-18 06:35:28.065453 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-18 06:35:28.065463 | orchestrator | Wednesday 18 February 2026 06:34:40 +0000 (0:00:02.064) 0:43:29.564 **** 2026-02-18 06:35:28.065474 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:35:28.065485 | orchestrator | 2026-02-18 06:35:28.065495 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-18 06:35:28.065506 | orchestrator | Wednesday 18 February 2026 06:34:42 +0000 (0:00:01.962) 0:43:31.526 **** 2026-02-18 06:35:28.065517 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:35:28.065527 | orchestrator | 2026-02-18 06:35:28.065538 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-18 06:35:28.065549 | orchestrator | Wednesday 18 February 2026 06:34:44 +0000 (0:00:02.278) 0:43:33.805 **** 2026-02-18 
06:35:28.065560 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:35:28.065570 | orchestrator | 2026-02-18 06:35:28.065581 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-18 06:35:28.065601 | orchestrator | Wednesday 18 February 2026 06:34:46 +0000 (0:00:01.150) 0:43:34.955 **** 2026-02-18 06:35:28.065611 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:35:28.065622 | orchestrator | 2026-02-18 06:35:28.065633 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-18 06:35:28.065644 | orchestrator | Wednesday 18 February 2026 06:34:47 +0000 (0:00:01.251) 0:43:36.206 **** 2026-02-18 06:35:28.065655 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-18 06:35:28.065666 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-18 06:35:28.065677 | orchestrator | 2026-02-18 06:35:28.065687 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-18 06:35:28.065698 | orchestrator | Wednesday 18 February 2026 06:34:49 +0000 (0:00:01.853) 0:43:38.060 **** 2026-02-18 06:35:28.065709 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-18 06:35:28.065720 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-18 06:35:28.065730 | orchestrator | 2026-02-18 06:35:28.065741 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-18 06:35:28.065752 | orchestrator | Wednesday 18 February 2026 06:34:52 +0000 (0:00:02.868) 0:43:40.929 **** 2026-02-18 06:35:28.065763 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-18 06:35:28.065794 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-18 06:35:28.065805 | orchestrator | 2026-02-18 06:35:28.065816 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-18 06:35:28.065827 | orchestrator | Wednesday 18 February 2026 06:34:56 +0000 
(0:00:04.281) 0:43:45.210 **** 2026-02-18 06:35:28.065838 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:35:28.065849 | orchestrator | 2026-02-18 06:35:28.065860 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-18 06:35:28.065871 | orchestrator | Wednesday 18 February 2026 06:34:57 +0000 (0:00:00.894) 0:43:46.105 **** 2026-02-18 06:35:28.065882 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:35:28.065896 | orchestrator | 2026-02-18 06:35:28.065914 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-18 06:35:28.065969 | orchestrator | Wednesday 18 February 2026 06:34:58 +0000 (0:00:00.912) 0:43:47.017 **** 2026-02-18 06:35:28.065990 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:35:28.066007 | orchestrator | 2026-02-18 06:35:28.066102 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-18 06:35:28.066122 | orchestrator | Wednesday 18 February 2026 06:34:58 +0000 (0:00:00.853) 0:43:47.871 **** 2026-02-18 06:35:28.066133 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:35:28.066144 | orchestrator | 2026-02-18 06:35:28.066155 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-18 06:35:28.066166 | orchestrator | Wednesday 18 February 2026 06:34:59 +0000 (0:00:00.785) 0:43:48.657 **** 2026-02-18 06:35:28.066176 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:35:28.066187 | orchestrator | 2026-02-18 06:35:28.066198 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-18 06:35:28.066208 | orchestrator | Wednesday 18 February 2026 06:35:00 +0000 (0:00:00.780) 0:43:49.438 **** 2026-02-18 06:35:28.066219 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-18 06:35:28.066231 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-18 06:35:28.066242 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-18 06:35:28.066252 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-02-18 06:35:28.066263 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:35:28.066274 | orchestrator | 2026-02-18 06:35:28.066285 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-18 06:35:28.066296 | orchestrator | 2026-02-18 06:35:28.066317 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 06:35:28.066336 | orchestrator | Wednesday 18 February 2026 06:35:14 +0000 (0:00:14.263) 0:44:03.701 **** 2026-02-18 06:35:28.066347 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-18 06:35:28.066357 | orchestrator | 2026-02-18 06:35:28.066368 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 06:35:28.066379 | orchestrator | Wednesday 18 February 2026 06:35:16 +0000 (0:00:01.199) 0:44:04.901 **** 2026-02-18 06:35:28.066390 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:28.066401 | orchestrator | 2026-02-18 06:35:28.066412 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-18 06:35:28.066423 | orchestrator | Wednesday 18 February 2026 06:35:17 +0000 (0:00:01.484) 0:44:06.386 **** 2026-02-18 06:35:28.066433 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:28.066444 | orchestrator | 2026-02-18 06:35:28.066455 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 06:35:28.066465 | 
orchestrator | Wednesday 18 February 2026 06:35:18 +0000 (0:00:01.121) 0:44:07.508 **** 2026-02-18 06:35:28.066476 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:28.066487 | orchestrator | 2026-02-18 06:35:28.066497 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 06:35:28.066508 | orchestrator | Wednesday 18 February 2026 06:35:20 +0000 (0:00:01.494) 0:44:09.003 **** 2026-02-18 06:35:28.066519 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:28.066530 | orchestrator | 2026-02-18 06:35:28.066540 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-18 06:35:28.066551 | orchestrator | Wednesday 18 February 2026 06:35:21 +0000 (0:00:01.208) 0:44:10.211 **** 2026-02-18 06:35:28.066562 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:28.066573 | orchestrator | 2026-02-18 06:35:28.066584 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-18 06:35:28.066594 | orchestrator | Wednesday 18 February 2026 06:35:22 +0000 (0:00:01.184) 0:44:11.395 **** 2026-02-18 06:35:28.066605 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:28.066616 | orchestrator | 2026-02-18 06:35:28.066627 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-18 06:35:28.066638 | orchestrator | Wednesday 18 February 2026 06:35:23 +0000 (0:00:01.155) 0:44:12.551 **** 2026-02-18 06:35:28.066648 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:35:28.066659 | orchestrator | 2026-02-18 06:35:28.066670 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-18 06:35:28.066681 | orchestrator | Wednesday 18 February 2026 06:35:24 +0000 (0:00:01.199) 0:44:13.750 **** 2026-02-18 06:35:28.066692 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:28.066703 | orchestrator | 2026-02-18 06:35:28.066714 | 
orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-18 06:35:28.066725 | orchestrator | Wednesday 18 February 2026 06:35:26 +0000 (0:00:01.142) 0:44:14.892 **** 2026-02-18 06:35:28.066736 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:35:28.066746 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:35:28.066757 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:35:28.066768 | orchestrator | 2026-02-18 06:35:28.066789 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-18 06:35:53.622538 | orchestrator | Wednesday 18 February 2026 06:35:28 +0000 (0:00:02.036) 0:44:16.929 **** 2026-02-18 06:35:53.622645 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:53.622662 | orchestrator | 2026-02-18 06:35:53.622675 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-18 06:35:53.622687 | orchestrator | Wednesday 18 February 2026 06:35:29 +0000 (0:00:01.390) 0:44:18.320 **** 2026-02-18 06:35:53.622698 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:35:53.622741 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:35:53.622761 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:35:53.622779 | orchestrator | 2026-02-18 06:35:53.622797 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-18 06:35:53.622815 | orchestrator | Wednesday 18 February 2026 06:35:32 +0000 (0:00:03.351) 0:44:21.671 **** 2026-02-18 06:35:53.622835 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-18 06:35:53.622855 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-18 06:35:53.622873 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-18 06:35:53.622889 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:35:53.622900 | orchestrator | 2026-02-18 06:35:53.622937 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-18 06:35:53.622950 | orchestrator | Wednesday 18 February 2026 06:35:34 +0000 (0:00:01.460) 0:44:23.131 **** 2026-02-18 06:35:53.622962 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 06:35:53.622977 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-18 06:35:53.622989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 06:35:53.623000 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:35:53.623011 | orchestrator | 2026-02-18 06:35:53.623037 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-18 06:35:53.623049 | orchestrator | Wednesday 18 February 2026 06:35:35 +0000 (0:00:01.698) 0:44:24.830 **** 2026-02-18 06:35:53.623064 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:35:53.623081 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:35:53.623094 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:35:53.623114 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:35:53.623133 | orchestrator | 2026-02-18 06:35:53.623151 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-18 06:35:53.623171 | orchestrator | Wednesday 18 February 2026 06:35:37 +0000 (0:00:01.234) 0:44:26.064 **** 2026-02-18 06:35:53.623217 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:35:30.331175', 'end': '2026-02-18 06:35:30.373614', 'delta': '0:00:00.042439', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-18 06:35:53.623255 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:35:30.922627', 'end': '2026-02-18 06:35:30.970017', 'delta': '0:00:00.047390', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-18 06:35:53.623270 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:35:31.505658', 'end': '2026-02-18 06:35:31.558457', 'delta': '0:00:00.052799', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-18 06:35:53.623283 | orchestrator | 2026-02-18 06:35:53.623296 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 
2026-02-18 06:35:53.623309 | orchestrator | Wednesday 18 February 2026 06:35:38 +0000 (0:00:01.313) 0:44:27.377 **** 2026-02-18 06:35:53.623321 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:53.623334 | orchestrator | 2026-02-18 06:35:53.623352 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-18 06:35:53.623365 | orchestrator | Wednesday 18 February 2026 06:35:39 +0000 (0:00:01.264) 0:44:28.642 **** 2026-02-18 06:35:53.623377 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:35:53.623390 | orchestrator | 2026-02-18 06:35:53.623402 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-18 06:35:53.623415 | orchestrator | Wednesday 18 February 2026 06:35:40 +0000 (0:00:01.234) 0:44:29.876 **** 2026-02-18 06:35:53.623427 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:53.623437 | orchestrator | 2026-02-18 06:35:53.623448 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-18 06:35:53.623459 | orchestrator | Wednesday 18 February 2026 06:35:42 +0000 (0:00:01.137) 0:44:31.014 **** 2026-02-18 06:35:53.623471 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:35:53.623489 | orchestrator | 2026-02-18 06:35:53.623508 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:35:53.623526 | orchestrator | Wednesday 18 February 2026 06:35:44 +0000 (0:00:02.032) 0:44:33.047 **** 2026-02-18 06:35:53.623544 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:53.623563 | orchestrator | 2026-02-18 06:35:53.623581 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 06:35:53.623601 | orchestrator | Wednesday 18 February 2026 06:35:45 +0000 (0:00:01.202) 0:44:34.249 **** 2026-02-18 06:35:53.623620 | orchestrator | skipping: [testbed-node-5] 2026-02-18 
06:35:53.623638 | orchestrator | 2026-02-18 06:35:53.623659 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 06:35:53.623670 | orchestrator | Wednesday 18 February 2026 06:35:46 +0000 (0:00:01.120) 0:44:35.369 **** 2026-02-18 06:35:53.623680 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:35:53.623691 | orchestrator | 2026-02-18 06:35:53.623702 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:35:53.623713 | orchestrator | Wednesday 18 February 2026 06:35:47 +0000 (0:00:01.242) 0:44:36.612 **** 2026-02-18 06:35:53.623723 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:35:53.623734 | orchestrator | 2026-02-18 06:35:53.623745 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 06:35:53.623755 | orchestrator | Wednesday 18 February 2026 06:35:48 +0000 (0:00:01.134) 0:44:37.747 **** 2026-02-18 06:35:53.623766 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:35:53.623777 | orchestrator | 2026-02-18 06:35:53.623787 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 06:35:53.623798 | orchestrator | Wednesday 18 February 2026 06:35:50 +0000 (0:00:01.298) 0:44:39.045 **** 2026-02-18 06:35:53.623809 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:53.623819 | orchestrator | 2026-02-18 06:35:53.623830 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-18 06:35:53.623841 | orchestrator | Wednesday 18 February 2026 06:35:51 +0000 (0:00:01.171) 0:44:40.217 **** 2026-02-18 06:35:53.623851 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:35:53.623866 | orchestrator | 2026-02-18 06:35:53.623885 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 06:35:53.623902 | orchestrator | Wednesday 18 
February 2026 06:35:52 +0000 (0:00:01.113) 0:44:41.331 **** 2026-02-18 06:35:53.623944 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:53.623963 | orchestrator | 2026-02-18 06:35:53.623993 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-18 06:35:56.245783 | orchestrator | Wednesday 18 February 2026 06:35:53 +0000 (0:00:01.154) 0:44:42.485 **** 2026-02-18 06:35:56.245881 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:35:56.245896 | orchestrator | 2026-02-18 06:35:56.245967 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 06:35:56.245988 | orchestrator | Wednesday 18 February 2026 06:35:54 +0000 (0:00:01.148) 0:44:43.634 **** 2026-02-18 06:35:56.245999 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:35:56.246010 | orchestrator | 2026-02-18 06:35:56.246079 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-18 06:35:56.246090 | orchestrator | Wednesday 18 February 2026 06:35:56 +0000 (0:00:01.257) 0:44:44.892 **** 2026-02-18 06:35:56.246102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:35:56.246118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'uuids': ['95905d4e-bf83-4096-8e9b-20c58ade16b8'], 'labels': [], 'masters': ['dm-3']}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB']}})  2026-02-18 06:35:56.246148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5427a30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 06:35:56.246181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3']}})  2026-02-18 06:35:56.246193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:35:56.246204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:35:56.246232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 06:35:56.246244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:35:56.246254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur', 'dm-uuid-CRYPT-LUKS2-8cf9dc351f244d02b853cca8cfa45a9c-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 06:35:56.246270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:35:56.246287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'uuids': ['8cf9dc35-1f24-4d02-b853-cca8cfa45a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur']}})  2026-02-18 06:35:56.246297 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72']}})  2026-02-18 06:35:56.246308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:35:56.246331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5e163393', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-18 06:35:57.675094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:35:57.675198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:35:57.675216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB', 'dm-uuid-CRYPT-LUKS2-95905d4ebf8340968e9b20c58ade16b8-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 06:35:57.675231 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:35:57.675245 | orchestrator | 2026-02-18 06:35:57.675258 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 06:35:57.675270 | orchestrator | Wednesday 18 February 2026 06:35:57 +0000 (0:00:01.407) 0:44:46.300 **** 2026-02-18 06:35:57.675283 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:35:57.675296 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'uuids': ['95905d4e-bf83-4096-8e9b-20c58ade16b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:35:57.675310 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5427a30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:35:57.675367 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:35:57.675383 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:35:57.675396 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:35:57.675408 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:35:57.675419 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:35:57.675458 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur', 'dm-uuid-CRYPT-LUKS2-8cf9dc351f244d02b853cca8cfa45a9c-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:36:03.336414 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:36:03.336526 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'uuids': ['8cf9dc35-1f24-4d02-b853-cca8cfa45a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:36:03.336544 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:36:03.336560 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:36:03.336635 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5e163393', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:36:03.336653 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:36:03.336666 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:36:03.336678 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB', 'dm-uuid-CRYPT-LUKS2-95905d4ebf8340968e9b20c58ade16b8-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:36:03.336701 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:36:03.336715 | orchestrator | 2026-02-18 06:36:03.336727 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:36:03.336740 | orchestrator | Wednesday 18 February 2026 06:35:58 +0000 (0:00:01.467) 0:44:47.768 **** 2026-02-18 06:36:03.336751 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:36:03.336763 | orchestrator | 2026-02-18 06:36:03.336803 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:36:03.336814 | orchestrator | Wednesday 18 February 2026 06:36:00 +0000 (0:00:01.604) 0:44:49.372 **** 2026-02-18 06:36:03.336826 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:36:03.336837 | orchestrator | 2026-02-18 06:36:03.336848 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:36:03.336859 | orchestrator | Wednesday 18 February 2026 06:36:01 +0000 (0:00:01.199) 0:44:50.572 **** 2026-02-18 06:36:03.336870 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:36:03.336881 | orchestrator | 2026-02-18 06:36:03.336898 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:36:03.336946 | orchestrator | Wednesday 18 February 2026 06:36:03 +0000 (0:00:01.630) 0:44:52.202 **** 2026-02-18 06:36:46.051066 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:36:46.051183 | orchestrator | 2026-02-18 06:36:46.051200 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:36:46.051214 | orchestrator | Wednesday 18 February 2026 06:36:04 +0000 (0:00:01.230) 0:44:53.433 **** 2026-02-18 06:36:46.051226 | orchestrator | skipping: [testbed-node-5] 2026-02-18 
06:36:46.051237 | orchestrator | 2026-02-18 06:36:46.051248 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:36:46.051259 | orchestrator | Wednesday 18 February 2026 06:36:05 +0000 (0:00:01.322) 0:44:54.755 **** 2026-02-18 06:36:46.051270 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:36:46.051281 | orchestrator | 2026-02-18 06:36:46.051292 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:36:46.051303 | orchestrator | Wednesday 18 February 2026 06:36:07 +0000 (0:00:01.195) 0:44:55.951 **** 2026-02-18 06:36:46.051315 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-18 06:36:46.051326 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-18 06:36:46.051336 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-18 06:36:46.051347 | orchestrator | 2026-02-18 06:36:46.051358 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:36:46.051369 | orchestrator | Wednesday 18 February 2026 06:36:08 +0000 (0:00:01.695) 0:44:57.647 **** 2026-02-18 06:36:46.051380 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-18 06:36:46.051391 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-18 06:36:46.051401 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-18 06:36:46.051412 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:36:46.051423 | orchestrator | 2026-02-18 06:36:46.051434 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:36:46.051445 | orchestrator | Wednesday 18 February 2026 06:36:10 +0000 (0:00:01.263) 0:44:58.910 **** 2026-02-18 06:36:46.051456 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-18 06:36:46.051467 | 
orchestrator | 2026-02-18 06:36:46.051479 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:36:46.051491 | orchestrator | Wednesday 18 February 2026 06:36:11 +0000 (0:00:01.130) 0:45:00.040 **** 2026-02-18 06:36:46.051526 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:36:46.051538 | orchestrator | 2026-02-18 06:36:46.051549 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:36:46.051560 | orchestrator | Wednesday 18 February 2026 06:36:12 +0000 (0:00:01.179) 0:45:01.220 **** 2026-02-18 06:36:46.051571 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:36:46.051582 | orchestrator | 2026-02-18 06:36:46.051593 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:36:46.051607 | orchestrator | Wednesday 18 February 2026 06:36:13 +0000 (0:00:01.171) 0:45:02.391 **** 2026-02-18 06:36:46.051619 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:36:46.051632 | orchestrator | 2026-02-18 06:36:46.051644 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:36:46.051657 | orchestrator | Wednesday 18 February 2026 06:36:14 +0000 (0:00:01.220) 0:45:03.612 **** 2026-02-18 06:36:46.051670 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:36:46.051683 | orchestrator | 2026-02-18 06:36:46.051696 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:36:46.051709 | orchestrator | Wednesday 18 February 2026 06:36:15 +0000 (0:00:01.248) 0:45:04.860 **** 2026-02-18 06:36:46.051722 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-18 06:36:46.051734 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 06:36:46.051747 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)
2026-02-18 06:36:46.051759 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:36:46.051772 | orchestrator |
2026-02-18 06:36:46.051784 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-18 06:36:46.051797 | orchestrator | Wednesday 18 February 2026 06:36:17 +0000 (0:00:01.457) 0:45:06.317 ****
2026-02-18 06:36:46.051809 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-18 06:36:46.051822 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-18 06:36:46.051834 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-18 06:36:46.051847 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:36:46.051859 | orchestrator |
2026-02-18 06:36:46.051872 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-18 06:36:46.051884 | orchestrator | Wednesday 18 February 2026 06:36:18 +0000 (0:00:01.423) 0:45:07.740 ****
2026-02-18 06:36:46.051922 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-18 06:36:46.051934 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-18 06:36:46.051946 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-18 06:36:46.051959 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:36:46.051972 | orchestrator |
2026-02-18 06:36:46.051984 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-18 06:36:46.051996 | orchestrator | Wednesday 18 February 2026 06:36:20 +0000 (0:00:01.785) 0:45:09.527 ****
2026-02-18 06:36:46.052007 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:36:46.052018 | orchestrator |
2026-02-18 06:36:46.052029 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-18 06:36:46.052040 | orchestrator | Wednesday 18 February 2026 06:36:21 +0000 (0:00:01.238) 0:45:10.765 ****
2026-02-18 06:36:46.052050 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-18 06:36:46.052061 | orchestrator |
2026-02-18 06:36:46.052072 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-18 06:36:46.052098 | orchestrator | Wednesday 18 February 2026 06:36:23 +0000 (0:00:01.745) 0:45:12.510 ****
2026-02-18 06:36:46.052127 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:36:46.052139 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:36:46.052150 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:36:46.052170 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-18 06:36:46.052181 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-18 06:36:46.052192 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-18 06:36:46.052203 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-18 06:36:46.052213 | orchestrator |
2026-02-18 06:36:46.052224 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-18 06:36:46.052235 | orchestrator | Wednesday 18 February 2026 06:36:25 +0000 (0:00:01.874) 0:45:14.385 ****
2026-02-18 06:36:46.052246 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:36:46.052257 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:36:46.052267 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:36:46.052278 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-18 06:36:46.052289 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-18 06:36:46.052300 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-18 06:36:46.052310 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-18 06:36:46.052321 | orchestrator |
2026-02-18 06:36:46.052332 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-18 06:36:46.052343 | orchestrator | Wednesday 18 February 2026 06:36:27 +0000 (0:00:02.237) 0:45:16.623 ****
2026-02-18 06:36:46.052354 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:36:46.052364 | orchestrator |
2026-02-18 06:36:46.052375 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-18 06:36:46.052386 | orchestrator | Wednesday 18 February 2026 06:36:28 +0000 (0:00:01.124) 0:45:17.747 ****
2026-02-18 06:36:46.052397 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:36:46.052408 | orchestrator |
2026-02-18 06:36:46.052418 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-18 06:36:46.052429 | orchestrator | Wednesday 18 February 2026 06:36:29 +0000 (0:00:00.936) 0:45:18.542 ****
2026-02-18 06:36:46.052441 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:36:46.052458 | orchestrator |
2026-02-18 06:36:46.052476 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-18 06:36:46.052488 | orchestrator | Wednesday 18 February 2026 06:36:30 +0000 (0:00:00.936) 0:45:19.478 ****
2026-02-18 06:36:46.052499 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-18 06:36:46.052510 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-02-18 06:36:46.052521 | orchestrator |
2026-02-18 06:36:46.052532 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-18 06:36:46.052543 | orchestrator | Wednesday 18 February 2026 06:36:34 +0000 (0:00:03.852) 0:45:23.331 ****
2026-02-18 06:36:46.052554 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-18 06:36:46.052565 | orchestrator |
2026-02-18 06:36:46.052575 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-18 06:36:46.052586 | orchestrator | Wednesday 18 February 2026 06:36:35 +0000 (0:00:01.196) 0:45:24.528 ****
2026-02-18 06:36:46.052597 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-18 06:36:46.052608 | orchestrator |
2026-02-18 06:36:46.052619 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-18 06:36:46.052629 | orchestrator | Wednesday 18 February 2026 06:36:36 +0000 (0:00:01.157) 0:45:25.685 ****
2026-02-18 06:36:46.052640 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:36:46.052651 | orchestrator |
2026-02-18 06:36:46.052662 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-18 06:36:46.052673 | orchestrator | Wednesday 18 February 2026 06:36:37 +0000 (0:00:01.151) 0:45:26.837 ****
2026-02-18 06:36:46.052691 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:36:46.052702 | orchestrator |
2026-02-18 06:36:46.052713 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-18 06:36:46.052723 | orchestrator | Wednesday 18 February 2026 06:36:39 +0000 (0:00:01.556) 0:45:28.394 ****
2026-02-18 06:36:46.052734 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:36:46.052745 | orchestrator |
2026-02-18 06:36:46.052756 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-18 06:36:46.052767 | orchestrator | Wednesday 18 February 2026 06:36:41 +0000 (0:00:01.535) 0:45:29.929 ****
2026-02-18 06:36:46.052778 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:36:46.052789 | orchestrator |
2026-02-18 06:36:46.052799 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-18 06:36:46.052810 | orchestrator | Wednesday 18 February 2026 06:36:42 +0000 (0:00:01.499) 0:45:31.428 ****
2026-02-18 06:36:46.052821 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:36:46.052832 | orchestrator |
2026-02-18 06:36:46.052842 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-18 06:36:46.052853 | orchestrator | Wednesday 18 February 2026 06:36:43 +0000 (0:00:01.145) 0:45:32.574 ****
2026-02-18 06:36:46.052864 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:36:46.052875 | orchestrator |
2026-02-18 06:36:46.052885 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-18 06:36:46.052931 | orchestrator | Wednesday 18 February 2026 06:36:44 +0000 (0:00:01.144) 0:45:33.718 ****
2026-02-18 06:36:46.052943 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:36:46.052954 | orchestrator |
2026-02-18 06:36:46.052972 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-18 06:37:26.766167 | orchestrator | Wednesday 18 February 2026 06:36:46 +0000 (0:00:01.190) 0:45:34.909 ****
2026-02-18 06:37:26.766304 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:37:26.766322 | orchestrator |
2026-02-18 06:37:26.766336 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-18 06:37:26.766348 | orchestrator | Wednesday 18 February 2026 06:36:47 +0000 (0:00:01.558) 0:45:36.468 ****
2026-02-18 06:37:26.766359 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:37:26.766370 | orchestrator |
2026-02-18 06:37:26.766382 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-18 06:37:26.766393 | orchestrator | Wednesday 18 February 2026 06:36:49 +0000 (0:00:01.565) 0:45:38.034 ****
2026-02-18 06:37:26.766404 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.766416 | orchestrator |
2026-02-18 06:37:26.766427 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-18 06:37:26.766438 | orchestrator | Wednesday 18 February 2026 06:36:49 +0000 (0:00:00.797) 0:45:38.832 ****
2026-02-18 06:37:26.766449 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.766460 | orchestrator |
2026-02-18 06:37:26.766471 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-18 06:37:26.766482 | orchestrator | Wednesday 18 February 2026 06:36:50 +0000 (0:00:00.789) 0:45:39.621 ****
2026-02-18 06:37:26.766493 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:37:26.766504 | orchestrator |
2026-02-18 06:37:26.766518 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-18 06:37:26.766541 | orchestrator | Wednesday 18 February 2026 06:36:51 +0000 (0:00:00.800) 0:45:40.422 ****
2026-02-18 06:37:26.766569 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:37:26.766587 | orchestrator |
2026-02-18 06:37:26.766605 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-18 06:37:26.766623 | orchestrator | Wednesday 18 February 2026 06:36:52 +0000 (0:00:00.817) 0:45:41.240 ****
2026-02-18 06:37:26.766639 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:37:26.766659 | orchestrator |
2026-02-18 06:37:26.766678 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-18 06:37:26.766696 | orchestrator | Wednesday 18 February 2026 06:36:53 +0000 (0:00:00.788) 0:45:42.029 ****
2026-02-18 06:37:26.766750 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.766767 | orchestrator |
2026-02-18 06:37:26.766780 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-18 06:37:26.766792 | orchestrator | Wednesday 18 February 2026 06:36:53 +0000 (0:00:00.789) 0:45:42.818 ****
2026-02-18 06:37:26.766805 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.766823 | orchestrator |
2026-02-18 06:37:26.766841 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-18 06:37:26.766858 | orchestrator | Wednesday 18 February 2026 06:36:54 +0000 (0:00:00.862) 0:45:43.681 ****
2026-02-18 06:37:26.766913 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.766927 | orchestrator |
2026-02-18 06:37:26.766938 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-18 06:37:26.766948 | orchestrator | Wednesday 18 February 2026 06:36:55 +0000 (0:00:00.808) 0:45:44.490 ****
2026-02-18 06:37:26.766959 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:37:26.766970 | orchestrator |
2026-02-18 06:37:26.766981 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-18 06:37:26.766992 | orchestrator | Wednesday 18 February 2026 06:36:56 +0000 (0:00:00.837) 0:45:45.328 ****
2026-02-18 06:37:26.767002 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:37:26.767013 | orchestrator |
2026-02-18 06:37:26.767024 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-18 06:37:26.767034 | orchestrator | Wednesday 18 February 2026 06:36:57 +0000 (0:00:00.805) 0:45:46.133 ****
2026-02-18 06:37:26.767045 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767056 | orchestrator |
2026-02-18 06:37:26.767067 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-18 06:37:26.767077 | orchestrator | Wednesday 18 February 2026 06:36:58 +0000 (0:00:00.793) 0:45:46.927 ****
2026-02-18 06:37:26.767088 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767099 | orchestrator |
2026-02-18 06:37:26.767110 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-18 06:37:26.767121 | orchestrator | Wednesday 18 February 2026 06:36:58 +0000 (0:00:00.773) 0:45:47.700 ****
2026-02-18 06:37:26.767132 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767143 | orchestrator |
2026-02-18 06:37:26.767153 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-18 06:37:26.767164 | orchestrator | Wednesday 18 February 2026 06:36:59 +0000 (0:00:00.820) 0:45:48.521 ****
2026-02-18 06:37:26.767175 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767186 | orchestrator |
2026-02-18 06:37:26.767196 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-18 06:37:26.767207 | orchestrator | Wednesday 18 February 2026 06:37:00 +0000 (0:00:00.791) 0:45:49.313 ****
2026-02-18 06:37:26.767218 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767229 | orchestrator |
2026-02-18 06:37:26.767239 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-18 06:37:26.767250 | orchestrator | Wednesday 18 February 2026 06:37:01 +0000 (0:00:00.836) 0:45:50.150 ****
2026-02-18 06:37:26.767261 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767271 | orchestrator |
2026-02-18 06:37:26.767282 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-18 06:37:26.767293 | orchestrator | Wednesday 18 February 2026 06:37:02 +0000 (0:00:00.756) 0:45:50.907 ****
2026-02-18 06:37:26.767304 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767314 | orchestrator |
2026-02-18 06:37:26.767325 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-18 06:37:26.767338 | orchestrator | Wednesday 18 February 2026 06:37:02 +0000 (0:00:00.760) 0:45:51.668 ****
2026-02-18 06:37:26.767348 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767359 | orchestrator |
2026-02-18 06:37:26.767385 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-18 06:37:26.767396 | orchestrator | Wednesday 18 February 2026 06:37:03 +0000 (0:00:00.817) 0:45:52.485 ****
2026-02-18 06:37:26.767438 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767450 | orchestrator |
2026-02-18 06:37:26.767461 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-18 06:37:26.767472 | orchestrator | Wednesday 18 February 2026 06:37:04 +0000 (0:00:00.877) 0:45:53.363 ****
2026-02-18 06:37:26.767483 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767494 | orchestrator |
2026-02-18 06:37:26.767505 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-18 06:37:26.767516 | orchestrator | Wednesday 18 February 2026 06:37:05 +0000 (0:00:00.767) 0:45:54.130 ****
2026-02-18 06:37:26.767527 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767538 | orchestrator |
2026-02-18 06:37:26.767549 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-18 06:37:26.767560 | orchestrator | Wednesday 18 February 2026 06:37:06 +0000 (0:00:00.763) 0:45:54.894 ****
2026-02-18 06:37:26.767570 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767581 | orchestrator |
2026-02-18 06:37:26.767592 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-18 06:37:26.767603 | orchestrator | Wednesday 18 February 2026 06:37:06 +0000 (0:00:00.784) 0:45:55.679 ****
2026-02-18 06:37:26.767614 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:37:26.767625 | orchestrator |
2026-02-18 06:37:26.767636 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-18 06:37:26.767647 | orchestrator | Wednesday 18 February 2026 06:37:08 +0000 (0:00:01.569) 0:45:57.248 ****
2026-02-18 06:37:26.767658 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:37:26.767668 | orchestrator |
2026-02-18 06:37:26.767679 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-18 06:37:26.767690 | orchestrator | Wednesday 18 February 2026 06:37:10 +0000 (0:00:01.870) 0:45:59.119 ****
2026-02-18 06:37:26.767703 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-18 06:37:26.767724 | orchestrator |
2026-02-18 06:37:26.767743 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-18 06:37:26.767761 | orchestrator | Wednesday 18 February 2026 06:37:11 +0000 (0:00:01.221) 0:46:00.340 ****
2026-02-18 06:37:26.767779 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767797 | orchestrator |
2026-02-18 06:37:26.767813 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-18 06:37:26.767831 | orchestrator | Wednesday 18 February 2026 06:37:12 +0000 (0:00:01.180) 0:46:01.521 ****
2026-02-18 06:37:26.767848 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.767867 | orchestrator |
2026-02-18 06:37:26.767932 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-18 06:37:26.767953 | orchestrator | Wednesday 18 February 2026 06:37:13 +0000 (0:00:01.127) 0:46:02.649 ****
2026-02-18 06:37:26.767971 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-18 06:37:26.767989 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-18 06:37:26.768010 | orchestrator |
2026-02-18 06:37:26.768028 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-18 06:37:26.768046 | orchestrator | Wednesday 18 February 2026 06:37:16 +0000 (0:00:02.290) 0:46:04.939 ****
2026-02-18 06:37:26.768064 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:37:26.768083 | orchestrator |
2026-02-18 06:37:26.768101 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-18 06:37:26.768120 | orchestrator | Wednesday 18 February 2026 06:37:17 +0000 (0:00:01.505) 0:46:06.445 ****
2026-02-18 06:37:26.768138 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.768157 | orchestrator |
2026-02-18 06:37:26.768168 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-18 06:37:26.768179 | orchestrator | Wednesday 18 February 2026 06:37:18 +0000 (0:00:01.234) 0:46:07.679 ****
2026-02-18 06:37:26.768201 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.768212 | orchestrator |
2026-02-18 06:37:26.768222 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-18 06:37:26.768233 | orchestrator | Wednesday 18 February 2026 06:37:19 +0000 (0:00:00.869) 0:46:08.549 ****
2026-02-18 06:37:26.768243 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.768254 | orchestrator |
2026-02-18 06:37:26.768265 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-18 06:37:26.768275 | orchestrator | Wednesday 18 February 2026 06:37:20 +0000 (0:00:00.788) 0:46:09.338 ****
2026-02-18 06:37:26.768286 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-18 06:37:26.768297 | orchestrator |
2026-02-18 06:37:26.768308 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-18 06:37:26.768318 | orchestrator | Wednesday 18 February 2026 06:37:21 +0000 (0:00:01.097) 0:46:10.436 ****
2026-02-18 06:37:26.768329 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:37:26.768340 | orchestrator |
2026-02-18 06:37:26.768350 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-18 06:37:26.768361 | orchestrator | Wednesday 18 February 2026 06:37:23 +0000 (0:00:01.722) 0:46:12.158 ****
2026-02-18 06:37:26.768371 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-18 06:37:26.768382 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-18 06:37:26.768392 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-18 06:37:26.768403 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.768413 | orchestrator |
2026-02-18 06:37:26.768424 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-18 06:37:26.768435 | orchestrator | Wednesday 18 February 2026 06:37:24 +0000 (0:00:01.177) 0:46:13.336 ****
2026-02-18 06:37:26.768445 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:37:26.768456 | orchestrator |
2026-02-18 06:37:26.768475 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-18 06:37:26.768486 | orchestrator | Wednesday 18 February 2026 06:37:25 +0000 (0:00:01.120) 0:46:14.457 ****
2026-02-18 06:37:26.768506 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177113 | orchestrator |
2026-02-18 06:38:10.177235 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-18 06:38:10.177254 | orchestrator | Wednesday 18 February 2026 06:37:26 +0000 (0:00:01.174) 0:46:15.631 ****
2026-02-18 06:38:10.177266 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177278 | orchestrator |
2026-02-18 06:38:10.177289 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-18 06:38:10.177300 | orchestrator | Wednesday 18 February 2026 06:37:27 +0000 (0:00:01.226) 0:46:16.858 ****
2026-02-18 06:38:10.177313 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177325 | orchestrator |
2026-02-18 06:38:10.177337 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-18 06:38:10.177349 | orchestrator | Wednesday 18 February 2026 06:37:29 +0000 (0:00:01.186) 0:46:18.044 ****
2026-02-18 06:38:10.177361 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177373 | orchestrator |
2026-02-18 06:38:10.177385 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-18 06:38:10.177397 | orchestrator | Wednesday 18 February 2026 06:37:29 +0000 (0:00:00.819) 0:46:18.864 ****
2026-02-18 06:38:10.177409 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:38:10.177423 | orchestrator |
2026-02-18 06:38:10.177435 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-18 06:38:10.177448 | orchestrator | Wednesday 18 February 2026 06:37:32 +0000 (0:00:02.094) 0:46:20.958 ****
2026-02-18 06:38:10.177460 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:38:10.177472 | orchestrator |
2026-02-18 06:38:10.177484 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-18 06:38:10.177523 | orchestrator | Wednesday 18 February 2026 06:37:32 +0000 (0:00:00.813) 0:46:21.772 ****
2026-02-18 06:38:10.177536 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-18 06:38:10.177548 | orchestrator |
2026-02-18 06:38:10.177561 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-18 06:38:10.177573 | orchestrator | Wednesday 18 February 2026 06:37:34 +0000 (0:00:01.287) 0:46:23.059 ****
2026-02-18 06:38:10.177585 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177598 | orchestrator |
2026-02-18 06:38:10.177609 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-18 06:38:10.177620 | orchestrator | Wednesday 18 February 2026 06:37:35 +0000 (0:00:01.189) 0:46:24.249 ****
2026-02-18 06:38:10.177633 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177646 | orchestrator |
2026-02-18 06:38:10.177659 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-18 06:38:10.177672 | orchestrator | Wednesday 18 February 2026 06:37:36 +0000 (0:00:01.200) 0:46:25.449 ****
2026-02-18 06:38:10.177686 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177699 | orchestrator |
2026-02-18 06:38:10.177712 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-18 06:38:10.177726 | orchestrator | Wednesday 18 February 2026 06:37:37 +0000 (0:00:01.151) 0:46:26.601 ****
2026-02-18 06:38:10.177740 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177754 | orchestrator |
2026-02-18 06:38:10.177767 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-18 06:38:10.177780 | orchestrator | Wednesday 18 February 2026 06:37:38 +0000 (0:00:01.130) 0:46:27.731 ****
2026-02-18 06:38:10.177793 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177805 | orchestrator |
2026-02-18 06:38:10.177817 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-18 06:38:10.177828 | orchestrator | Wednesday 18 February 2026 06:37:40 +0000 (0:00:01.170) 0:46:28.901 ****
2026-02-18 06:38:10.177839 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177851 | orchestrator |
2026-02-18 06:38:10.177891 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-18 06:38:10.177904 | orchestrator | Wednesday 18 February 2026 06:37:41 +0000 (0:00:01.166) 0:46:30.068 ****
2026-02-18 06:38:10.177916 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177928 | orchestrator |
2026-02-18 06:38:10.177940 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-18 06:38:10.177950 | orchestrator | Wednesday 18 February 2026 06:37:42 +0000 (0:00:01.139) 0:46:31.208 ****
2026-02-18 06:38:10.177961 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.177972 | orchestrator |
2026-02-18 06:38:10.177984 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-18 06:38:10.177995 | orchestrator | Wednesday 18 February 2026 06:37:43 +0000 (0:00:01.298) 0:46:32.506 ****
2026-02-18 06:38:10.178005 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:38:10.178082 | orchestrator |
2026-02-18 06:38:10.178096 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-18 06:38:10.178107 | orchestrator | Wednesday 18 February 2026 06:37:44 +0000 (0:00:00.799) 0:46:33.306 ****
2026-02-18 06:38:10.178117 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-18 06:38:10.178129 | orchestrator |
2026-02-18 06:38:10.178141 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-18 06:38:10.178152 | orchestrator | Wednesday 18 February 2026 06:37:45 +0000 (0:00:01.381) 0:46:34.688 ****
2026-02-18 06:38:10.178163 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-18 06:38:10.178176 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-18 06:38:10.178183 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-18 06:38:10.178190 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-18 06:38:10.178197 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-18 06:38:10.178215 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-18 06:38:10.178223 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-18 06:38:10.178249 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-18 06:38:10.178261 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-18 06:38:10.178294 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-18 06:38:10.178305 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-18 06:38:10.178315 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-18 06:38:10.178326 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-18 06:38:10.178336 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-18 06:38:10.178347 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-18 06:38:10.178358 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-18 06:38:10.178369 | orchestrator |
2026-02-18 06:38:10.178381 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-18 06:38:10.178389 | orchestrator | Wednesday 18 February 2026 06:37:51 +0000 (0:00:06.027) 0:46:40.715 ****
2026-02-18 06:38:10.178395 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-18 06:38:10.178402 | orchestrator |
2026-02-18 06:38:10.178408 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-18 06:38:10.178416 | orchestrator | Wednesday 18 February 2026 06:37:53 +0000 (0:00:01.279) 0:46:41.995 ****
2026-02-18 06:38:10.178428 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-18 06:38:10.178437 | orchestrator |
2026-02-18 06:38:10.178443 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-18 06:38:10.178450 | orchestrator | Wednesday 18 February 2026 06:37:54 +0000 (0:00:01.539) 0:46:43.535 ****
2026-02-18 06:38:10.178456 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-18 06:38:10.178463 | orchestrator |
2026-02-18 06:38:10.178469 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-18 06:38:10.178476 | orchestrator | Wednesday 18 February 2026 06:37:56 +0000 (0:00:01.656) 0:46:45.191 ****
2026-02-18 06:38:10.178483 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.178489 | orchestrator |
2026-02-18 06:38:10.178496 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-18 06:38:10.178502 | orchestrator | Wednesday 18 February 2026 06:37:57 +0000 (0:00:00.870) 0:46:46.062 ****
2026-02-18 06:38:10.178509 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.178516 | orchestrator |
2026-02-18 06:38:10.178522 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-18 06:38:10.178529 | orchestrator | Wednesday 18 February 2026 06:37:57 +0000 (0:00:00.812) 0:46:46.875 ****
2026-02-18 06:38:10.178535 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.178542 | orchestrator |
2026-02-18 06:38:10.178548 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-18 06:38:10.178555 | orchestrator | Wednesday 18 February 2026 06:37:58 +0000 (0:00:00.797) 0:46:47.672 ****
2026-02-18 06:38:10.178562 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.178568 | orchestrator |
2026-02-18 06:38:10.178575 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-18 06:38:10.178581 | orchestrator | Wednesday 18 February 2026 06:37:59 +0000 (0:00:00.841) 0:46:48.514 ****
2026-02-18 06:38:10.178588 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.178594 | orchestrator |
2026-02-18 06:38:10.178601 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-18 06:38:10.178608 | orchestrator | Wednesday 18 February 2026 06:38:00 +0000 (0:00:00.779) 0:46:49.294 ****
2026-02-18 06:38:10.178622 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.178629 | orchestrator |
2026-02-18 06:38:10.178635 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-18 06:38:10.178642 | orchestrator | Wednesday 18 February 2026 06:38:01 +0000 (0:00:00.767) 0:46:50.062 ****
2026-02-18 06:38:10.178648 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.178655 | orchestrator |
2026-02-18 06:38:10.178661 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-18 06:38:10.178668 | orchestrator | Wednesday 18 February 2026 06:38:01 +0000 (0:00:00.786) 0:46:50.849 ****
2026-02-18 06:38:10.178675 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.178681 | orchestrator |
2026-02-18 06:38:10.178688 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-18 06:38:10.178694 | orchestrator | Wednesday 18 February 2026 06:38:02 +0000 (0:00:00.800) 0:46:51.649 ****
2026-02-18 06:38:10.178701 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.178707 | orchestrator |
2026-02-18 06:38:10.178714 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-18 06:38:10.178720 | orchestrator | Wednesday 18 February 2026 06:38:03 +0000 (0:00:00.775) 0:46:52.424 ****
2026-02-18 06:38:10.178727 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:10.178733 | orchestrator |
2026-02-18 06:38:10.178740 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-18 06:38:10.178746 | orchestrator | Wednesday 18 February 2026 06:38:04 +0000 (0:00:00.796) 0:46:53.220 ****
2026-02-18 06:38:10.178753 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:38:10.178759 | orchestrator |
2026-02-18 06:38:10.178766 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-18 06:38:10.178772 | orchestrator | Wednesday 18 February 2026 06:38:05 +0000 (0:00:00.945) 0:46:54.166 ****
2026-02-18 06:38:10.178779 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-18 06:38:10.178785 | orchestrator |
2026-02-18 06:38:10.178792 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-18 06:38:10.178803 | orchestrator | Wednesday 18 February 2026 06:38:09 +0000 (0:00:04.014) 0:46:58.180 ****
2026-02-18 06:38:10.178816 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-18 06:38:51.688379 | orchestrator |
2026-02-18 06:38:51.688500 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-18 06:38:51.688517 | orchestrator | Wednesday 18 February 2026 06:38:10 +0000 (0:00:00.863) 0:46:59.043 ****
2026-02-18 06:38:51.688532 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-18 06:38:51.688547 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-18 06:38:51.688559 | orchestrator |
2026-02-18 06:38:51.688571 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-18 06:38:51.688582 | orchestrator | Wednesday 18 February 2026 06:38:17 +0000 (0:00:07.030) 0:47:06.074 ****
2026-02-18 06:38:51.688593 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:51.688605 | orchestrator |
2026-02-18 06:38:51.688616 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-18 06:38:51.688627 | orchestrator | Wednesday 18 February 2026 06:38:18 +0000 (0:00:00.809) 0:47:06.883 ****
2026-02-18 06:38:51.688664 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:51.688675 | orchestrator |
2026-02-18 06:38:51.688687 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-18 06:38:51.688699 | orchestrator | Wednesday 18 February 2026 06:38:18 +0000 (0:00:00.774) 0:47:07.658 ****
2026-02-18 06:38:51.688710 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:51.688721 | orchestrator |
2026-02-18 06:38:51.688731 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-18 06:38:51.688743 | orchestrator | Wednesday 18 February 2026 06:38:19 +0000 (0:00:00.825) 0:47:08.483 ****
2026-02-18 06:38:51.688753 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:51.688764 | orchestrator |
2026-02-18 06:38:51.688775 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-18 06:38:51.688786 | orchestrator | Wednesday 18 February 2026 06:38:20 +0000 (0:00:00.855) 0:47:09.339 ****
2026-02-18 06:38:51.688796 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:51.688807 | orchestrator |
2026-02-18 06:38:51.688818 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-18 06:38:51.688829 | orchestrator | Wednesday 18 February 2026 06:38:21 +0000 (0:00:00.923) 0:47:10.262 ****
2026-02-18 06:38:51.688840 | orchestrator | ok: [testbed-node-5]
2026-02-18 06:38:51.688887 | orchestrator |
2026-02-18 06:38:51.688898 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-18 06:38:51.688909 | orchestrator | Wednesday 18 February 2026 06:38:22 +0000 (0:00:00.915) 0:47:11.178 ****
2026-02-18 06:38:51.688922 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-18 06:38:51.688935 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-18 06:38:51.688948 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-18 06:38:51.688960 | orchestrator | skipping: [testbed-node-5]
2026-02-18 06:38:51.688973 | orchestrator |
2026-02-18 06:38:51.688985 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-18 06:38:51.688998 | orchestrator | Wednesday 18 February 2026 06:38:23 +0000 (0:00:01.529) 0:47:12.707 ****
2026-02-18 06:38:51.689011 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-18 06:38:51.689023 |
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 06:38:51.689035 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-18 06:38:51.689047 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:38:51.689059 | orchestrator | 2026-02-18 06:38:51.689071 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:38:51.689083 | orchestrator | Wednesday 18 February 2026 06:38:24 +0000 (0:00:01.084) 0:47:13.792 **** 2026-02-18 06:38:51.689095 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-18 06:38:51.689108 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 06:38:51.689120 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-18 06:38:51.689133 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:38:51.689145 | orchestrator | 2026-02-18 06:38:51.689157 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:38:51.689169 | orchestrator | Wednesday 18 February 2026 06:38:26 +0000 (0:00:01.087) 0:47:14.880 **** 2026-02-18 06:38:51.689181 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:38:51.689193 | orchestrator | 2026-02-18 06:38:51.689205 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:38:51.689218 | orchestrator | Wednesday 18 February 2026 06:38:26 +0000 (0:00:00.842) 0:47:15.723 **** 2026-02-18 06:38:51.689230 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-18 06:38:51.689242 | orchestrator | 2026-02-18 06:38:51.689254 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-18 06:38:51.689267 | orchestrator | Wednesday 18 February 2026 06:38:27 +0000 (0:00:01.016) 0:47:16.740 **** 2026-02-18 06:38:51.689279 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:38:51.689298 | orchestrator | 
2026-02-18 06:38:51.689324 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-18 06:38:51.689336 | orchestrator | Wednesday 18 February 2026 06:38:29 +0000 (0:00:01.430) 0:47:18.171 **** 2026-02-18 06:38:51.689346 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:38:51.689357 | orchestrator | 2026-02-18 06:38:51.689386 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-18 06:38:51.689398 | orchestrator | Wednesday 18 February 2026 06:38:30 +0000 (0:00:00.796) 0:47:18.967 **** 2026-02-18 06:38:51.689409 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:38:51.689420 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:38:51.689431 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:38:51.689442 | orchestrator | 2026-02-18 06:38:51.689453 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-18 06:38:51.689463 | orchestrator | Wednesday 18 February 2026 06:38:31 +0000 (0:00:01.657) 0:47:20.625 **** 2026-02-18 06:38:51.689474 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-02-18 06:38:51.689485 | orchestrator | 2026-02-18 06:38:51.689496 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-18 06:38:51.689507 | orchestrator | Wednesday 18 February 2026 06:38:32 +0000 (0:00:01.239) 0:47:21.865 **** 2026-02-18 06:38:51.689517 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:38:51.689528 | orchestrator | 2026-02-18 06:38:51.689539 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-18 06:38:51.689550 | orchestrator | Wednesday 18 February 2026 06:38:34 +0000 (0:00:01.139) 
0:47:23.005 **** 2026-02-18 06:38:51.689560 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:38:51.689571 | orchestrator | 2026-02-18 06:38:51.689582 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-18 06:38:51.689593 | orchestrator | Wednesday 18 February 2026 06:38:35 +0000 (0:00:01.164) 0:47:24.169 **** 2026-02-18 06:38:51.689603 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:38:51.689614 | orchestrator | 2026-02-18 06:38:51.689625 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-18 06:38:51.689636 | orchestrator | Wednesday 18 February 2026 06:38:36 +0000 (0:00:01.469) 0:47:25.638 **** 2026-02-18 06:38:51.689647 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:38:51.689657 | orchestrator | 2026-02-18 06:38:51.689668 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-18 06:38:51.689679 | orchestrator | Wednesday 18 February 2026 06:38:37 +0000 (0:00:01.190) 0:47:26.829 **** 2026-02-18 06:38:51.689690 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-18 06:38:51.689701 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-18 06:38:51.689712 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-18 06:38:51.689723 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-18 06:38:51.689733 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-18 06:38:51.689744 | orchestrator | 2026-02-18 06:38:51.689755 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-18 06:38:51.689766 | orchestrator | Wednesday 18 February 2026 06:38:40 +0000 (0:00:02.514) 0:47:29.344 **** 2026-02-18 
06:38:51.689776 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:38:51.689787 | orchestrator | 2026-02-18 06:38:51.689798 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-18 06:38:51.689809 | orchestrator | Wednesday 18 February 2026 06:38:41 +0000 (0:00:00.779) 0:47:30.124 **** 2026-02-18 06:38:51.689820 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-02-18 06:38:51.689838 | orchestrator | 2026-02-18 06:38:51.689877 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-18 06:38:51.689889 | orchestrator | Wednesday 18 February 2026 06:38:42 +0000 (0:00:01.117) 0:47:31.242 **** 2026-02-18 06:38:51.689900 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-18 06:38:51.689911 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-18 06:38:51.689921 | orchestrator | 2026-02-18 06:38:51.689932 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-18 06:38:51.689943 | orchestrator | Wednesday 18 February 2026 06:38:44 +0000 (0:00:01.842) 0:47:33.084 **** 2026-02-18 06:38:51.689954 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 06:38:51.689964 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-18 06:38:51.689975 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 06:38:51.689986 | orchestrator | 2026-02-18 06:38:51.689997 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-18 06:38:51.690007 | orchestrator | Wednesday 18 February 2026 06:38:47 +0000 (0:00:03.314) 0:47:36.399 **** 2026-02-18 06:38:51.690081 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-18 06:38:51.690093 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-18 
06:38:51.690104 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:38:51.690115 | orchestrator | 2026-02-18 06:38:51.690126 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-18 06:38:51.690136 | orchestrator | Wednesday 18 February 2026 06:38:49 +0000 (0:00:01.606) 0:47:38.006 **** 2026-02-18 06:38:51.690147 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:38:51.690158 | orchestrator | 2026-02-18 06:38:51.690168 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-18 06:38:51.690179 | orchestrator | Wednesday 18 February 2026 06:38:50 +0000 (0:00:00.913) 0:47:38.919 **** 2026-02-18 06:38:51.690190 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:38:51.690200 | orchestrator | 2026-02-18 06:38:51.690217 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-18 06:38:51.690228 | orchestrator | Wednesday 18 February 2026 06:38:50 +0000 (0:00:00.832) 0:47:39.752 **** 2026-02-18 06:38:51.690239 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:38:51.690250 | orchestrator | 2026-02-18 06:38:51.690268 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-18 06:41:06.206195 | orchestrator | Wednesday 18 February 2026 06:38:51 +0000 (0:00:00.799) 0:47:40.551 **** 2026-02-18 06:41:06.206315 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-02-18 06:41:06.206333 | orchestrator | 2026-02-18 06:41:06.206355 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-18 06:41:06.206374 | orchestrator | Wednesday 18 February 2026 06:38:52 +0000 (0:00:01.247) 0:47:41.799 **** 2026-02-18 06:41:06.206392 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:41:06.206421 | orchestrator | 2026-02-18 06:41:06.206442 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-18 06:41:06.206460 | orchestrator | Wednesday 18 February 2026 06:38:54 +0000 (0:00:01.477) 0:47:43.276 **** 2026-02-18 06:41:06.206478 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:41:06.206496 | orchestrator | 2026-02-18 06:41:06.206513 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-18 06:41:06.206532 | orchestrator | Wednesday 18 February 2026 06:38:57 +0000 (0:00:03.334) 0:47:46.611 **** 2026-02-18 06:41:06.206550 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-02-18 06:41:06.206569 | orchestrator | 2026-02-18 06:41:06.206588 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-18 06:41:06.206606 | orchestrator | Wednesday 18 February 2026 06:38:58 +0000 (0:00:01.147) 0:47:47.758 **** 2026-02-18 06:41:06.206625 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:41:06.206637 | orchestrator | 2026-02-18 06:41:06.206676 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-18 06:41:06.206688 | orchestrator | Wednesday 18 February 2026 06:39:00 +0000 (0:00:01.984) 0:47:49.743 **** 2026-02-18 06:41:06.206698 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:41:06.206709 | orchestrator | 2026-02-18 06:41:06.206720 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-18 06:41:06.206731 | orchestrator | Wednesday 18 February 2026 06:39:02 +0000 (0:00:01.959) 0:47:51.702 **** 2026-02-18 06:41:06.206744 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:41:06.206757 | orchestrator | 2026-02-18 06:41:06.206769 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-18 06:41:06.206782 | orchestrator | Wednesday 18 February 2026 06:39:04 +0000 (0:00:02.164) 0:47:53.867 **** 2026-02-18 
06:41:06.206794 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:41:06.206853 | orchestrator | 2026-02-18 06:41:06.206868 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-18 06:41:06.206881 | orchestrator | Wednesday 18 February 2026 06:39:06 +0000 (0:00:01.185) 0:47:55.053 **** 2026-02-18 06:41:06.206893 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:41:06.206905 | orchestrator | 2026-02-18 06:41:06.206918 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-18 06:41:06.206930 | orchestrator | Wednesday 18 February 2026 06:39:07 +0000 (0:00:01.122) 0:47:56.175 **** 2026-02-18 06:41:06.206943 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-18 06:41:06.206956 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-18 06:41:06.206969 | orchestrator | 2026-02-18 06:41:06.206981 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-18 06:41:06.206994 | orchestrator | Wednesday 18 February 2026 06:39:09 +0000 (0:00:01.931) 0:47:58.107 **** 2026-02-18 06:41:06.207007 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-18 06:41:06.207019 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-18 06:41:06.207031 | orchestrator | 2026-02-18 06:41:06.207044 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-18 06:41:06.207058 | orchestrator | Wednesday 18 February 2026 06:39:12 +0000 (0:00:02.927) 0:48:01.035 **** 2026-02-18 06:41:06.207070 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-18 06:41:06.207082 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-18 06:41:06.207093 | orchestrator | 2026-02-18 06:41:06.207104 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-18 06:41:06.207119 | orchestrator | Wednesday 18 February 2026 06:39:17 +0000 
(0:00:05.313) 0:48:06.348 **** 2026-02-18 06:41:06.207130 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:41:06.207140 | orchestrator | 2026-02-18 06:41:06.207151 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-18 06:41:06.207162 | orchestrator | Wednesday 18 February 2026 06:39:18 +0000 (0:00:01.320) 0:48:07.668 **** 2026-02-18 06:41:06.207172 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-02-18 06:41:06.207185 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:41:06.207196 | orchestrator | 2026-02-18 06:41:06.207206 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-18 06:41:06.207217 | orchestrator | Wednesday 18 February 2026 06:39:31 +0000 (0:00:12.954) 0:48:20.623 **** 2026-02-18 06:41:06.207228 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:41:06.207239 | orchestrator | 2026-02-18 06:41:06.207250 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-18 06:41:06.207261 | orchestrator | Wednesday 18 February 2026 06:39:32 +0000 (0:00:00.894) 0:48:21.518 **** 2026-02-18 06:41:06.207272 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:41:06.207283 | orchestrator | 2026-02-18 06:41:06.207293 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-18 06:41:06.207304 | orchestrator | Wednesday 18 February 2026 06:39:33 +0000 (0:00:00.776) 0:48:22.294 **** 2026-02-18 06:41:06.207324 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:41:06.207335 | orchestrator | 2026-02-18 06:41:06.207346 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-02-18 06:41:06.207372 | orchestrator | Wednesday 18 February 2026 06:39:34 +0000 (0:00:00.792) 0:48:23.087 **** 2026-02-18 06:41:06.207383 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:41:06.207394 | orchestrator | 2026-02-18 06:41:06.207405 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-02-18 06:41:06.207416 | orchestrator | 2026-02-18 06:41:06.207447 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 06:41:06.207458 | orchestrator | Wednesday 18 February 2026 06:39:36 +0000 (0:00:02.601) 0:48:25.689 **** 2026-02-18 06:41:06.207469 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:41:06.207480 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:41:06.207491 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:41:06.207502 | orchestrator | 2026-02-18 06:41:06.207513 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 06:41:06.207523 | orchestrator | Wednesday 18 February 2026 06:39:38 +0000 (0:00:01.686) 0:48:27.375 **** 2026-02-18 06:41:06.207534 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:41:06.207545 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:41:06.207556 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:41:06.207568 | orchestrator | 2026-02-18 06:41:06.207587 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-02-18 06:41:06.207605 | orchestrator | Wednesday 18 February 2026 06:39:40 +0000 (0:00:01.708) 0:48:29.084 **** 2026-02-18 06:41:06.207624 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-18 06:41:06.207642 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-18 
06:41:06.207661 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-18 06:41:06.207679 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-18 06:41:06.207697 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-18 06:41:06.207716 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-18 06:41:06.207735 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-02-18 06:41:06.207753 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-02-18 06:41:06.207771 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-02-18 06:41:06.207792 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-02-18 06:41:06.207838 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-02-18 06:41:06.207858 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-18 06:41:06.207878 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-18 06:41:06.207891 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-18 06:41:06.207902 | orchestrator | 2026-02-18 06:41:06.207913 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-02-18 06:41:06.207924 | orchestrator | Wednesday 18 February 2026 06:40:49 +0000 (0:01:09.508) 0:49:38.592 **** 2026-02-18 06:41:06.207935 | orchestrator 
| changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-18 06:41:06.207945 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-18 06:41:06.207966 | orchestrator | 2026-02-18 06:41:06.207977 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-02-18 06:41:06.207988 | orchestrator | Wednesday 18 February 2026 06:40:55 +0000 (0:00:05.641) 0:49:44.234 **** 2026-02-18 06:41:06.207998 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:41:06.208009 | orchestrator | 2026-02-18 06:41:06.208020 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-02-18 06:41:06.208031 | orchestrator | 2026-02-18 06:41:06.208042 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 06:41:06.208053 | orchestrator | Wednesday 18 February 2026 06:40:58 +0000 (0:00:03.225) 0:49:47.459 **** 2026-02-18 06:41:06.208064 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-18 06:41:06.208075 | orchestrator | 2026-02-18 06:41:06.208086 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 06:41:06.208096 | orchestrator | Wednesday 18 February 2026 06:40:59 +0000 (0:00:01.133) 0:49:48.593 **** 2026-02-18 06:41:06.208107 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:06.208118 | orchestrator | 2026-02-18 06:41:06.208129 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-18 06:41:06.208139 | orchestrator | Wednesday 18 February 2026 06:41:01 +0000 (0:00:01.476) 0:49:50.070 **** 2026-02-18 06:41:06.208150 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:06.208161 | orchestrator | 2026-02-18 06:41:06.208172 | orchestrator | TASK [ceph-facts : Check if podman binary is 
present] ************************** 2026-02-18 06:41:06.208182 | orchestrator | Wednesday 18 February 2026 06:41:02 +0000 (0:00:01.217) 0:49:51.287 **** 2026-02-18 06:41:06.208193 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:06.208204 | orchestrator | 2026-02-18 06:41:06.208215 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 06:41:06.208225 | orchestrator | Wednesday 18 February 2026 06:41:03 +0000 (0:00:01.420) 0:49:52.708 **** 2026-02-18 06:41:06.208236 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:06.208247 | orchestrator | 2026-02-18 06:41:06.208266 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-18 06:41:06.208277 | orchestrator | Wednesday 18 February 2026 06:41:04 +0000 (0:00:01.135) 0:49:53.844 **** 2026-02-18 06:41:06.208288 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:06.208299 | orchestrator | 2026-02-18 06:41:06.208310 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-18 06:41:06.208333 | orchestrator | Wednesday 18 February 2026 06:41:06 +0000 (0:00:01.224) 0:49:55.068 **** 2026-02-18 06:41:32.021843 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:32.021976 | orchestrator | 2026-02-18 06:41:32.021994 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-18 06:41:32.022008 | orchestrator | Wednesday 18 February 2026 06:41:07 +0000 (0:00:01.189) 0:49:56.258 **** 2026-02-18 06:41:32.022082 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:32.022095 | orchestrator | 2026-02-18 06:41:32.022107 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-18 06:41:32.022119 | orchestrator | Wednesday 18 February 2026 06:41:08 +0000 (0:00:01.163) 0:49:57.422 **** 2026-02-18 06:41:32.022130 | orchestrator | ok: [testbed-node-0] 2026-02-18 
06:41:32.022141 | orchestrator | 2026-02-18 06:41:32.022153 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-18 06:41:32.022164 | orchestrator | Wednesday 18 February 2026 06:41:09 +0000 (0:00:01.167) 0:49:58.589 **** 2026-02-18 06:41:32.022175 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 06:41:32.022187 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:41:32.022198 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:41:32.022208 | orchestrator | 2026-02-18 06:41:32.022219 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-18 06:41:32.022230 | orchestrator | Wednesday 18 February 2026 06:41:11 +0000 (0:00:01.717) 0:50:00.306 **** 2026-02-18 06:41:32.022269 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:32.022281 | orchestrator | 2026-02-18 06:41:32.022292 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-18 06:41:32.022303 | orchestrator | Wednesday 18 February 2026 06:41:12 +0000 (0:00:01.276) 0:50:01.583 **** 2026-02-18 06:41:32.022314 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 06:41:32.022325 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:41:32.022336 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:41:32.022347 | orchestrator | 2026-02-18 06:41:32.022360 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-18 06:41:32.022372 | orchestrator | Wednesday 18 February 2026 06:41:16 +0000 (0:00:03.446) 0:50:05.030 **** 2026-02-18 06:41:32.022385 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-18 06:41:32.022398 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-18 06:41:32.022410 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-18 06:41:32.022422 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:32.022435 | orchestrator | 2026-02-18 06:41:32.022447 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-18 06:41:32.022460 | orchestrator | Wednesday 18 February 2026 06:41:17 +0000 (0:00:01.534) 0:50:06.565 **** 2026-02-18 06:41:32.022474 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 06:41:32.022490 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-18 06:41:32.022503 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 06:41:32.022516 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:32.022529 | orchestrator | 2026-02-18 06:41:32.022542 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-18 06:41:32.022555 | orchestrator | Wednesday 18 February 2026 06:41:19 +0000 (0:00:02.231) 0:50:08.796 **** 2026-02-18 06:41:32.022570 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:32.022587 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:32.022632 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:32.022646 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:32.022671 | orchestrator | 2026-02-18 06:41:32.022684 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-18 06:41:32.022697 | orchestrator | Wednesday 18 February 2026 06:41:21 +0000 (0:00:01.247) 0:50:10.043 **** 2026-02-18 06:41:32.022711 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:41:13.256248', 'end': '2026-02-18 06:41:13.306859', 'delta': '0:00:00.050611', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-18 06:41:32.022726 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:41:13.858518', 'end': '2026-02-18 06:41:13.913457', 'delta': '0:00:00.054939', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-18 06:41:32.022738 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:41:14.826654', 'end': '2026-02-18 06:41:14.875679', 'delta': '0:00:00.049025', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-18 06:41:32.022749 | orchestrator | 2026-02-18 06:41:32.022760 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 
2026-02-18 06:41:32.022771 | orchestrator | Wednesday 18 February 2026 06:41:22 +0000 (0:00:01.298) 0:50:11.342 **** 2026-02-18 06:41:32.022783 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:32.022795 | orchestrator | 2026-02-18 06:41:32.022830 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-18 06:41:32.022842 | orchestrator | Wednesday 18 February 2026 06:41:23 +0000 (0:00:01.238) 0:50:12.581 **** 2026-02-18 06:41:32.022852 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:32.022864 | orchestrator | 2026-02-18 06:41:32.022875 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-18 06:41:32.022886 | orchestrator | Wednesday 18 February 2026 06:41:25 +0000 (0:00:01.354) 0:50:13.935 **** 2026-02-18 06:41:32.022897 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:32.022908 | orchestrator | 2026-02-18 06:41:32.022919 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-18 06:41:32.022929 | orchestrator | Wednesday 18 February 2026 06:41:26 +0000 (0:00:01.161) 0:50:15.097 **** 2026-02-18 06:41:32.022940 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:32.022951 | orchestrator | 2026-02-18 06:41:32.022962 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:41:32.022973 | orchestrator | Wednesday 18 February 2026 06:41:28 +0000 (0:00:02.059) 0:50:17.156 **** 2026-02-18 06:41:32.022991 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:32.023002 | orchestrator | 2026-02-18 06:41:32.023013 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 06:41:32.023024 | orchestrator | Wednesday 18 February 2026 06:41:29 +0000 (0:00:01.243) 0:50:18.400 **** 2026-02-18 06:41:32.023035 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:32.023046 | orchestrator | 
2026-02-18 06:41:32.023062 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 06:41:32.023073 | orchestrator | Wednesday 18 February 2026 06:41:30 +0000 (0:00:01.212) 0:50:19.612 **** 2026-02-18 06:41:32.023084 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:32.023095 | orchestrator | 2026-02-18 06:41:32.023106 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:41:32.023125 | orchestrator | Wednesday 18 February 2026 06:41:32 +0000 (0:00:01.270) 0:50:20.882 **** 2026-02-18 06:41:42.806010 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:42.806184 | orchestrator | 2026-02-18 06:41:42.806202 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 06:41:42.806216 | orchestrator | Wednesday 18 February 2026 06:41:33 +0000 (0:00:01.127) 0:50:22.010 **** 2026-02-18 06:41:42.806227 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:42.806239 | orchestrator | 2026-02-18 06:41:42.806250 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 06:41:42.806261 | orchestrator | Wednesday 18 February 2026 06:41:34 +0000 (0:00:01.138) 0:50:23.149 **** 2026-02-18 06:41:42.806272 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:42.806283 | orchestrator | 2026-02-18 06:41:42.806294 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-18 06:41:42.806304 | orchestrator | Wednesday 18 February 2026 06:41:35 +0000 (0:00:01.220) 0:50:24.370 **** 2026-02-18 06:41:42.806315 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:42.806326 | orchestrator | 2026-02-18 06:41:42.806337 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 06:41:42.806348 | orchestrator | Wednesday 18 February 2026 06:41:36 +0000 
(0:00:01.128) 0:50:25.498 **** 2026-02-18 06:41:42.806358 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:42.806369 | orchestrator | 2026-02-18 06:41:42.806380 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-18 06:41:42.806391 | orchestrator | Wednesday 18 February 2026 06:41:37 +0000 (0:00:01.170) 0:50:26.670 **** 2026-02-18 06:41:42.806402 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:42.806413 | orchestrator | 2026-02-18 06:41:42.806424 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 06:41:42.806435 | orchestrator | Wednesday 18 February 2026 06:41:39 +0000 (0:00:01.272) 0:50:27.942 **** 2026-02-18 06:41:42.806446 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:42.806457 | orchestrator | 2026-02-18 06:41:42.806468 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-18 06:41:42.806478 | orchestrator | Wednesday 18 February 2026 06:41:40 +0000 (0:00:01.134) 0:50:29.076 **** 2026-02-18 06:41:42.806492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:41:42.806507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-02-18 06:41:42.806545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:41:42.806561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 06:41:42.806576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:41:42.806621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': 
[]}})  2026-02-18 06:41:42.806636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:41:42.806653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ab2d03ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 06:41:42.806678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:41:42.806691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:41:42.806704 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:42.806716 | orchestrator | 2026-02-18 06:41:42.806728 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 06:41:42.806740 | orchestrator | Wednesday 18 February 2026 06:41:41 +0000 (0:00:01.320) 0:50:30.397 **** 2026-02-18 06:41:42.806767 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:47.021976 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:47.022093 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:47.022103 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:47.022127 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:47.022133 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-02-18 06:41:47.022167 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:47.022190 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ab2d03ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 
'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab2d03ed-cd4a-48d1-b8d2-d00a10f41162-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:47.022201 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:47.022206 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': 
[], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:41:47.022212 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:41:47.022218 | orchestrator | 2026-02-18 06:41:47.022224 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:41:47.022230 | orchestrator | Wednesday 18 February 2026 06:41:42 +0000 (0:00:01.275) 0:50:31.673 **** 2026-02-18 06:41:47.022235 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:47.022241 | orchestrator | 2026-02-18 06:41:47.022249 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:41:47.022254 | orchestrator | Wednesday 18 February 2026 06:41:44 +0000 (0:00:01.541) 0:50:33.214 **** 2026-02-18 06:41:47.022259 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:47.022264 | orchestrator | 2026-02-18 06:41:47.022268 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:41:47.022273 | orchestrator | Wednesday 18 February 2026 06:41:45 +0000 (0:00:01.171) 0:50:34.385 **** 2026-02-18 06:41:47.022278 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:41:47.022283 | orchestrator | 2026-02-18 06:41:47.022288 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:41:47.022296 | orchestrator | Wednesday 18 February 2026 06:41:47 +0000 (0:00:01.507) 0:50:35.892 **** 2026-02-18 06:42:44.313106 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:42:44.313226 | orchestrator | 2026-02-18 06:42:44.313242 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:42:44.313255 
| orchestrator | Wednesday 18 February 2026 06:41:48 +0000 (0:00:01.151) 0:50:37.044 **** 2026-02-18 06:42:44.313266 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:42:44.313277 | orchestrator | 2026-02-18 06:42:44.313288 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:42:44.313299 | orchestrator | Wednesday 18 February 2026 06:41:49 +0000 (0:00:01.250) 0:50:38.295 **** 2026-02-18 06:42:44.313310 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:42:44.313321 | orchestrator | 2026-02-18 06:42:44.313332 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:42:44.313342 | orchestrator | Wednesday 18 February 2026 06:41:50 +0000 (0:00:01.195) 0:50:39.490 **** 2026-02-18 06:42:44.313380 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 06:42:44.313392 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-18 06:42:44.313403 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-18 06:42:44.313414 | orchestrator | 2026-02-18 06:42:44.313424 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:42:44.313435 | orchestrator | Wednesday 18 February 2026 06:41:52 +0000 (0:00:02.126) 0:50:41.616 **** 2026-02-18 06:42:44.313446 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-18 06:42:44.313457 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-18 06:42:44.313468 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-18 06:42:44.313479 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:42:44.313490 | orchestrator | 2026-02-18 06:42:44.313501 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:42:44.313512 | orchestrator | Wednesday 18 February 2026 06:41:53 +0000 (0:00:01.235) 
0:50:42.852 **** 2026-02-18 06:42:44.313522 | orchestrator | skipping: [testbed-node-0] 2026-02-18 06:42:44.313533 | orchestrator | 2026-02-18 06:42:44.313543 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 06:42:44.313554 | orchestrator | Wednesday 18 February 2026 06:41:55 +0000 (0:00:01.269) 0:50:44.121 **** 2026-02-18 06:42:44.313565 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 06:42:44.313576 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:42:44.313587 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:42:44.313597 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:42:44.313608 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:42:44.313619 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:42:44.313629 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:42:44.313640 | orchestrator | 2026-02-18 06:42:44.313651 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 06:42:44.313661 | orchestrator | Wednesday 18 February 2026 06:41:57 +0000 (0:00:02.293) 0:50:46.415 **** 2026-02-18 06:42:44.313672 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-18 06:42:44.313682 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:42:44.313693 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:42:44.313703 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:42:44.313714 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:42:44.313724 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:42:44.313735 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:42:44.313745 | orchestrator | 2026-02-18 06:42:44.313756 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-02-18 06:42:44.313766 | orchestrator | Wednesday 18 February 2026 06:42:00 +0000 (0:00:02.725) 0:50:49.141 **** 2026-02-18 06:42:44.313777 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:42:44.313788 | orchestrator | 2026-02-18 06:42:44.313860 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-02-18 06:42:44.313873 | orchestrator | Wednesday 18 February 2026 06:42:03 +0000 (0:00:03.125) 0:50:52.267 **** 2026-02-18 06:42:44.313884 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:42:44.313894 | orchestrator | 2026-02-18 06:42:44.313905 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-02-18 06:42:44.313916 | orchestrator | Wednesday 18 February 2026 06:42:06 +0000 (0:00:02.857) 0:50:55.124 **** 2026-02-18 06:42:44.313937 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:42:44.313948 | orchestrator | 2026-02-18 06:42:44.313959 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-02-18 06:42:44.313985 | orchestrator | Wednesday 18 February 2026 06:42:08 +0000 (0:00:02.237) 0:50:57.362 **** 2026-02-18 06:42:44.314083 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4691', 'value': {'gid': 4691, 'name': 'testbed-node-5', 'rank': 0, 'incarnation': 7, 'state': 'up:active', 'state_seq': 1249, 'addr': '192.168.16.15:6817/648769410', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.15:6816', 'nonce': 
648769410}, {'type': 'v1', 'addr': '192.168.16.15:6817', 'nonce': 648769410}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-02-18 06:42:44.314102 | orchestrator | 2026-02-18 06:42:44.314113 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-02-18 06:42:44.314125 | orchestrator | Wednesday 18 February 2026 06:42:09 +0000 (0:00:01.174) 0:50:58.537 **** 2026-02-18 06:42:44.314135 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-18 06:42:44.314146 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-18 06:42:44.314157 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-5) 2026-02-18 06:42:44.314167 | orchestrator | 2026-02-18 06:42:44.314178 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-02-18 06:42:44.314189 | orchestrator | Wednesday 18 February 2026 06:42:11 +0000 (0:00:01.652) 0:51:00.190 **** 2026-02-18 06:42:44.314200 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3) 2026-02-18 06:42:44.314210 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-02-18 06:42:44.314221 | orchestrator | 2026-02-18 06:42:44.314232 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-02-18 06:42:44.314243 | orchestrator | Wednesday 18 February 2026 06:42:12 +0000 (0:00:01.517) 0:51:01.707 **** 2026-02-18 06:42:44.314253 | orchestrator | changed: [testbed-node-0 
-> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:42:44.314264 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:42:44.314275 | orchestrator | 2026-02-18 06:42:44.314285 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-02-18 06:42:44.314296 | orchestrator | Wednesday 18 February 2026 06:42:24 +0000 (0:00:11.833) 0:51:13.540 **** 2026-02-18 06:42:44.314307 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:42:44.314317 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:42:44.314328 | orchestrator | 2026-02-18 06:42:44.314339 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-02-18 06:42:44.314349 | orchestrator | Wednesday 18 February 2026 06:42:28 +0000 (0:00:04.086) 0:51:17.627 **** 2026-02-18 06:42:44.314360 | orchestrator | ok: [testbed-node-0] 2026-02-18 06:42:44.314371 | orchestrator | 2026-02-18 06:42:44.314382 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-02-18 06:42:44.314392 | orchestrator | Wednesday 18 February 2026 06:42:30 +0000 (0:00:02.136) 0:51:19.764 **** 2026-02-18 06:42:44.314403 | orchestrator | changed: [testbed-node-0] 2026-02-18 06:42:44.314414 | orchestrator | 2026-02-18 06:42:44.314425 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-02-18 06:42:44.314435 | orchestrator | 2026-02-18 06:42:44.314446 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 06:42:44.314465 | orchestrator | Wednesday 18 February 2026 06:42:33 +0000 (0:00:02.263) 0:51:22.028 **** 2026-02-18 06:42:44.314476 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for 
testbed-node-5 2026-02-18 06:42:44.314486 | orchestrator | 2026-02-18 06:42:44.314497 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 06:42:44.314508 | orchestrator | Wednesday 18 February 2026 06:42:34 +0000 (0:00:01.120) 0:51:23.148 **** 2026-02-18 06:42:44.314519 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:42:44.314529 | orchestrator | 2026-02-18 06:42:44.314540 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-18 06:42:44.314551 | orchestrator | Wednesday 18 February 2026 06:42:35 +0000 (0:00:01.456) 0:51:24.605 **** 2026-02-18 06:42:44.314562 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:42:44.314573 | orchestrator | 2026-02-18 06:42:44.314583 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 06:42:44.314594 | orchestrator | Wednesday 18 February 2026 06:42:36 +0000 (0:00:01.184) 0:51:25.790 **** 2026-02-18 06:42:44.314605 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:42:44.314616 | orchestrator | 2026-02-18 06:42:44.314627 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 06:42:44.314637 | orchestrator | Wednesday 18 February 2026 06:42:38 +0000 (0:00:01.519) 0:51:27.310 **** 2026-02-18 06:42:44.314648 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:42:44.314659 | orchestrator | 2026-02-18 06:42:44.314669 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-18 06:42:44.314680 | orchestrator | Wednesday 18 February 2026 06:42:39 +0000 (0:00:01.187) 0:51:28.498 **** 2026-02-18 06:42:44.314691 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:42:44.314702 | orchestrator | 2026-02-18 06:42:44.314712 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-18 06:42:44.314729 | orchestrator | Wednesday 18 
February 2026 06:42:40 +0000 (0:00:01.237) 0:51:29.736 **** 2026-02-18 06:42:44.314740 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:42:44.314751 | orchestrator | 2026-02-18 06:42:44.314762 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-18 06:42:44.314772 | orchestrator | Wednesday 18 February 2026 06:42:41 +0000 (0:00:01.133) 0:51:30.869 **** 2026-02-18 06:42:44.314783 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:42:44.314836 | orchestrator | 2026-02-18 06:42:44.314848 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-18 06:42:44.314859 | orchestrator | Wednesday 18 February 2026 06:42:43 +0000 (0:00:01.152) 0:51:32.022 **** 2026-02-18 06:42:44.314870 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:42:44.314881 | orchestrator | 2026-02-18 06:42:44.314900 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-18 06:43:09.777376 | orchestrator | Wednesday 18 February 2026 06:42:44 +0000 (0:00:01.150) 0:51:33.173 **** 2026-02-18 06:43:09.777519 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:43:09.777538 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:43:09.777550 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:43:09.777562 | orchestrator | 2026-02-18 06:43:09.777574 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-18 06:43:09.777585 | orchestrator | Wednesday 18 February 2026 06:42:46 +0000 (0:00:02.105) 0:51:35.279 **** 2026-02-18 06:43:09.777596 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:09.777608 | orchestrator | 2026-02-18 06:43:09.777619 | orchestrator | TASK [ceph-facts : Find a running mon container] 
******************************* 2026-02-18 06:43:09.777630 | orchestrator | Wednesday 18 February 2026 06:42:47 +0000 (0:00:01.292) 0:51:36.572 **** 2026-02-18 06:43:09.777641 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:43:09.777652 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:43:09.777690 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:43:09.777702 | orchestrator | 2026-02-18 06:43:09.777713 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-18 06:43:09.777723 | orchestrator | Wednesday 18 February 2026 06:42:50 +0000 (0:00:03.263) 0:51:39.835 **** 2026-02-18 06:43:09.777735 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-18 06:43:09.777746 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-18 06:43:09.777757 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-18 06:43:09.777768 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:09.777779 | orchestrator | 2026-02-18 06:43:09.777790 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-18 06:43:09.777827 | orchestrator | Wednesday 18 February 2026 06:42:52 +0000 (0:00:01.904) 0:51:41.739 **** 2026-02-18 06:43:09.777839 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 06:43:09.777854 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2026-02-18 06:43:09.777866 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 06:43:09.777877 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:09.777888 | orchestrator | 2026-02-18 06:43:09.777899 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-18 06:43:09.777909 | orchestrator | Wednesday 18 February 2026 06:42:54 +0000 (0:00:01.696) 0:51:43.436 **** 2026-02-18 06:43:09.777925 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:09.777941 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:09.777968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2026-02-18 06:43:09.777981 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:09.777994 | orchestrator | 2026-02-18 06:43:09.778006 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-18 06:43:09.778075 | orchestrator | Wednesday 18 February 2026 06:42:55 +0000 (0:00:01.159) 0:51:44.596 **** 2026-02-18 06:43:09.778107 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:42:48.668673', 'end': '2026-02-18 06:42:48.719430', 'delta': '0:00:00.050757', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-18 06:43:09.778132 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:42:49.207937', 'end': '2026-02-18 06:42:49.260461', 'delta': '0:00:00.052524', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-18 06:43:09.778143 | orchestrator | ok: 
[testbed-node-5] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:42:49.740660', 'end': '2026-02-18 06:42:49.784133', 'delta': '0:00:00.043473', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-18 06:43:09.778155 | orchestrator | 2026-02-18 06:43:09.778166 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-18 06:43:09.778177 | orchestrator | Wednesday 18 February 2026 06:42:56 +0000 (0:00:01.201) 0:51:45.798 **** 2026-02-18 06:43:09.778188 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:09.778199 | orchestrator | 2026-02-18 06:43:09.778209 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-18 06:43:09.778221 | orchestrator | Wednesday 18 February 2026 06:42:58 +0000 (0:00:01.278) 0:51:47.076 **** 2026-02-18 06:43:09.778232 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:09.778242 | orchestrator | 2026-02-18 06:43:09.778253 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-18 06:43:09.778264 | orchestrator | Wednesday 18 February 2026 06:42:59 +0000 (0:00:01.287) 0:51:48.364 **** 2026-02-18 06:43:09.778275 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:09.778286 | orchestrator | 2026-02-18 06:43:09.778297 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-18 06:43:09.778308 | 
orchestrator | Wednesday 18 February 2026 06:43:00 +0000 (0:00:01.171) 0:51:49.535 **** 2026-02-18 06:43:09.778319 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-18 06:43:09.778330 | orchestrator | 2026-02-18 06:43:09.778341 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:43:09.778352 | orchestrator | Wednesday 18 February 2026 06:43:02 +0000 (0:00:02.018) 0:51:51.554 **** 2026-02-18 06:43:09.778363 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:09.778374 | orchestrator | 2026-02-18 06:43:09.778385 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 06:43:09.778396 | orchestrator | Wednesday 18 February 2026 06:43:03 +0000 (0:00:01.156) 0:51:52.711 **** 2026-02-18 06:43:09.778407 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:09.778417 | orchestrator | 2026-02-18 06:43:09.778428 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 06:43:09.778445 | orchestrator | Wednesday 18 February 2026 06:43:05 +0000 (0:00:01.179) 0:51:53.890 **** 2026-02-18 06:43:09.778457 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:09.778468 | orchestrator | 2026-02-18 06:43:09.778484 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 06:43:09.778496 | orchestrator | Wednesday 18 February 2026 06:43:06 +0000 (0:00:01.242) 0:51:55.133 **** 2026-02-18 06:43:09.778507 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:09.778517 | orchestrator | 2026-02-18 06:43:09.778528 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 06:43:09.778539 | orchestrator | Wednesday 18 February 2026 06:43:07 +0000 (0:00:01.138) 0:51:56.272 **** 2026-02-18 06:43:09.778550 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:09.778561 | 
orchestrator | 2026-02-18 06:43:09.778572 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 06:43:09.778583 | orchestrator | Wednesday 18 February 2026 06:43:08 +0000 (0:00:01.126) 0:51:57.399 **** 2026-02-18 06:43:09.778600 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:14.682420 | orchestrator | 2026-02-18 06:43:14.682532 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-18 06:43:14.682547 | orchestrator | Wednesday 18 February 2026 06:43:09 +0000 (0:00:01.245) 0:51:58.645 **** 2026-02-18 06:43:14.682558 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:14.682569 | orchestrator | 2026-02-18 06:43:14.682579 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 06:43:14.682589 | orchestrator | Wednesday 18 February 2026 06:43:10 +0000 (0:00:01.196) 0:51:59.841 **** 2026-02-18 06:43:14.682599 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:14.682610 | orchestrator | 2026-02-18 06:43:14.682620 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-18 06:43:14.682630 | orchestrator | Wednesday 18 February 2026 06:43:12 +0000 (0:00:01.178) 0:52:01.020 **** 2026-02-18 06:43:14.682640 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:14.682650 | orchestrator | 2026-02-18 06:43:14.682659 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 06:43:14.682669 | orchestrator | Wednesday 18 February 2026 06:43:13 +0000 (0:00:01.145) 0:52:02.166 **** 2026-02-18 06:43:14.682685 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:14.682701 | orchestrator | 2026-02-18 06:43:14.682716 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-18 06:43:14.682732 | orchestrator | Wednesday 18 February 2026 06:43:14 +0000 
(0:00:01.160) 0:52:03.327 **** 2026-02-18 06:43:14.682751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:43:14.682772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'uuids': ['95905d4e-bf83-4096-8e9b-20c58ade16b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB']}})  2026-02-18 06:43:14.682791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5427a30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-18 06:43:14.682987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3']}})  2026-02-18 06:43:14.683011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:43:14.683054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:43:14.683076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-44-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 06:43:14.683095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:43:14.683112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur', 'dm-uuid-CRYPT-LUKS2-8cf9dc351f244d02b853cca8cfa45a9c-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 06:43:14.683130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:43:14.683160 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'uuids': ['8cf9dc35-1f24-4d02-b853-cca8cfa45a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur']}})  2026-02-18 06:43:14.683189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72']}})  2026-02-18 06:43:14.683222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:43:16.069744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5e163393', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 06:43:16.069949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:43:16.069971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:43:16.069997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB', 'dm-uuid-CRYPT-LUKS2-95905d4ebf8340968e9b20c58ade16b8-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 06:43:16.070010 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:16.070081 | orchestrator | 2026-02-18 06:43:16.070093 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 06:43:16.070104 | orchestrator | Wednesday 18 February 2026 06:43:15 +0000 (0:00:01.364) 0:52:04.691 **** 2026-02-18 06:43:16.070164 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:16.070179 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'uuids': ['95905d4e-bf83-4096-8e9b-20c58ade16b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:16.070191 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5427a30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:16.070240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:16.070260 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:16.070281 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:17.349753 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:17.349908 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:17.349950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur', 'dm-uuid-CRYPT-LUKS2-8cf9dc351f244d02b853cca8cfa45a9c-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:17.349963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:17.349976 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'uuids': ['8cf9dc35-1f24-4d02-b853-cca8cfa45a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:17.350009 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:17.350084 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:17.350154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5e163393', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:17.350171 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:17.350193 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:53.104204 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB', 'dm-uuid-CRYPT-LUKS2-95905d4ebf8340968e9b20c58ade16b8-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:43:53.104360 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.104379 | orchestrator | 2026-02-18 06:43:53.104393 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:43:53.104405 | orchestrator | Wednesday 18 February 2026 06:43:17 +0000 (0:00:01.525) 0:52:06.217 **** 2026-02-18 06:43:53.104417 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:53.104429 | orchestrator | 2026-02-18 06:43:53.104440 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:43:53.104451 | orchestrator | Wednesday 18 February 2026 06:43:18 +0000 (0:00:01.591) 0:52:07.808 **** 2026-02-18 06:43:53.104461 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:53.104472 | orchestrator | 2026-02-18 06:43:53.104483 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:43:53.104494 | orchestrator | Wednesday 18 February 2026 06:43:20 +0000 (0:00:01.146) 0:52:08.955 **** 2026-02-18 06:43:53.104505 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:53.104515 | orchestrator | 2026-02-18 06:43:53.104526 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:43:53.104537 | orchestrator | Wednesday 18 February 2026 06:43:21 +0000 (0:00:01.470) 0:52:10.426 **** 2026-02-18 06:43:53.104548 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.104558 | orchestrator | 2026-02-18 06:43:53.104569 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:43:53.104580 | orchestrator | Wednesday 18 February 2026 06:43:22 +0000 (0:00:01.148) 0:52:11.575 **** 2026-02-18 06:43:53.104590 | orchestrator | skipping: [testbed-node-5] 2026-02-18 
06:43:53.104601 | orchestrator | 2026-02-18 06:43:53.104612 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:43:53.104623 | orchestrator | Wednesday 18 February 2026 06:43:24 +0000 (0:00:01.313) 0:52:12.888 **** 2026-02-18 06:43:53.104633 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.104644 | orchestrator | 2026-02-18 06:43:53.104655 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:43:53.104666 | orchestrator | Wednesday 18 February 2026 06:43:25 +0000 (0:00:01.217) 0:52:14.105 **** 2026-02-18 06:43:53.104676 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-18 06:43:53.104687 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-18 06:43:53.104698 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-18 06:43:53.104709 | orchestrator | 2026-02-18 06:43:53.104723 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:43:53.104735 | orchestrator | Wednesday 18 February 2026 06:43:27 +0000 (0:00:02.246) 0:52:16.351 **** 2026-02-18 06:43:53.104747 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-18 06:43:53.104760 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-18 06:43:53.104785 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-18 06:43:53.104822 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.104834 | orchestrator | 2026-02-18 06:43:53.104846 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:43:53.104859 | orchestrator | Wednesday 18 February 2026 06:43:28 +0000 (0:00:01.250) 0:52:17.602 **** 2026-02-18 06:43:53.104871 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-18 06:43:53.104884 | 
orchestrator | 2026-02-18 06:43:53.104897 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:43:53.104911 | orchestrator | Wednesday 18 February 2026 06:43:29 +0000 (0:00:01.155) 0:52:18.757 **** 2026-02-18 06:43:53.104932 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.104944 | orchestrator | 2026-02-18 06:43:53.104957 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:43:53.104969 | orchestrator | Wednesday 18 February 2026 06:43:31 +0000 (0:00:01.209) 0:52:19.967 **** 2026-02-18 06:43:53.104982 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.104994 | orchestrator | 2026-02-18 06:43:53.105006 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:43:53.105018 | orchestrator | Wednesday 18 February 2026 06:43:32 +0000 (0:00:01.200) 0:52:21.167 **** 2026-02-18 06:43:53.105031 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.105043 | orchestrator | 2026-02-18 06:43:53.105055 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:43:53.105068 | orchestrator | Wednesday 18 February 2026 06:43:33 +0000 (0:00:01.159) 0:52:22.327 **** 2026-02-18 06:43:53.105079 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:53.105090 | orchestrator | 2026-02-18 06:43:53.105101 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:43:53.105112 | orchestrator | Wednesday 18 February 2026 06:43:34 +0000 (0:00:01.396) 0:52:23.723 **** 2026-02-18 06:43:53.105123 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-18 06:43:53.105152 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 06:43:53.105163 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-18 06:43:53.105174 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.105185 | orchestrator | 2026-02-18 06:43:53.105195 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:43:53.105206 | orchestrator | Wednesday 18 February 2026 06:43:36 +0000 (0:00:01.431) 0:52:25.154 **** 2026-02-18 06:43:53.105217 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-18 06:43:53.105228 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 06:43:53.105238 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-18 06:43:53.105248 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.105259 | orchestrator | 2026-02-18 06:43:53.105270 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:43:53.105280 | orchestrator | Wednesday 18 February 2026 06:43:37 +0000 (0:00:01.435) 0:52:26.590 **** 2026-02-18 06:43:53.105291 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-18 06:43:53.105302 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 06:43:53.105312 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-18 06:43:53.105323 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.105334 | orchestrator | 2026-02-18 06:43:53.105344 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:43:53.105355 | orchestrator | Wednesday 18 February 2026 06:43:39 +0000 (0:00:01.396) 0:52:27.987 **** 2026-02-18 06:43:53.105366 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:53.105376 | orchestrator | 2026-02-18 06:43:53.105387 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:43:53.105398 | orchestrator | Wednesday 18 February 2026 06:43:40 +0000 
(0:00:01.114) 0:52:29.102 **** 2026-02-18 06:43:53.105409 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-18 06:43:53.105419 | orchestrator | 2026-02-18 06:43:53.105430 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 06:43:53.105441 | orchestrator | Wednesday 18 February 2026 06:43:41 +0000 (0:00:01.699) 0:52:30.802 **** 2026-02-18 06:43:53.105452 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:43:53.105462 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:43:53.105473 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:43:53.105484 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:43:53.105501 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:43:53.105512 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-18 06:43:53.105523 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:43:53.105533 | orchestrator | 2026-02-18 06:43:53.105544 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 06:43:53.105555 | orchestrator | Wednesday 18 February 2026 06:43:44 +0000 (0:00:02.264) 0:52:33.067 **** 2026-02-18 06:43:53.105565 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:43:53.105576 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:43:53.105587 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:43:53.105597 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-18 06:43:53.105613 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:43:53.105624 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-18 06:43:53.105635 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:43:53.105646 | orchestrator | 2026-02-18 06:43:53.105656 | orchestrator | TASK [Prevent restart from the packaging] ************************************** 2026-02-18 06:43:53.105667 | orchestrator | Wednesday 18 February 2026 06:43:46 +0000 (0:00:02.704) 0:52:35.771 **** 2026-02-18 06:43:53.105678 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.105688 | orchestrator | 2026-02-18 06:43:53.105699 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 06:43:53.105710 | orchestrator | Wednesday 18 February 2026 06:43:48 +0000 (0:00:01.124) 0:52:36.896 **** 2026-02-18 06:43:53.105721 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-18 06:43:53.105731 | orchestrator | 2026-02-18 06:43:53.105742 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 06:43:53.105753 | orchestrator | Wednesday 18 February 2026 06:43:49 +0000 (0:00:01.267) 0:52:38.164 **** 2026-02-18 06:43:53.105764 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-18 06:43:53.105775 | orchestrator | 2026-02-18 06:43:53.105785 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 06:43:53.105835 | orchestrator | Wednesday 18 February 2026 06:43:50 +0000 (0:00:01.160) 0:52:39.324 **** 2026-02-18 06:43:53.105847 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:43:53.105858 | orchestrator | 2026-02-18 06:43:53.105869 | orchestrator 
| TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 06:43:53.105879 | orchestrator | Wednesday 18 February 2026 06:43:51 +0000 (0:00:01.124) 0:52:40.448 **** 2026-02-18 06:43:53.105890 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:43:53.105901 | orchestrator | 2026-02-18 06:43:53.105912 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-18 06:43:53.105955 | orchestrator | Wednesday 18 February 2026 06:43:53 +0000 (0:00:01.521) 0:52:41.969 **** 2026-02-18 06:44:44.657285 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.657394 | orchestrator | 2026-02-18 06:44:44.657410 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-18 06:44:44.657423 | orchestrator | Wednesday 18 February 2026 06:43:54 +0000 (0:00:01.496) 0:52:43.466 **** 2026-02-18 06:44:44.657435 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.657446 | orchestrator | 2026-02-18 06:44:44.657457 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-18 06:44:44.657468 | orchestrator | Wednesday 18 February 2026 06:43:56 +0000 (0:00:01.547) 0:52:45.013 **** 2026-02-18 06:44:44.657479 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.657514 | orchestrator | 2026-02-18 06:44:44.657527 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-18 06:44:44.657538 | orchestrator | Wednesday 18 February 2026 06:43:57 +0000 (0:00:01.133) 0:52:46.147 **** 2026-02-18 06:44:44.657548 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.657559 | orchestrator | 2026-02-18 06:44:44.657570 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-18 06:44:44.657581 | orchestrator | Wednesday 18 February 2026 06:43:58 +0000 (0:00:01.154) 0:52:47.301 **** 2026-02-18 06:44:44.657592 | 
orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.657603 | orchestrator | 2026-02-18 06:44:44.657613 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-18 06:44:44.657624 | orchestrator | Wednesday 18 February 2026 06:43:59 +0000 (0:00:01.188) 0:52:48.490 **** 2026-02-18 06:44:44.657635 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.657646 | orchestrator | 2026-02-18 06:44:44.657656 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-18 06:44:44.657667 | orchestrator | Wednesday 18 February 2026 06:44:01 +0000 (0:00:01.582) 0:52:50.073 **** 2026-02-18 06:44:44.657678 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.657688 | orchestrator | 2026-02-18 06:44:44.657699 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-18 06:44:44.657710 | orchestrator | Wednesday 18 February 2026 06:44:02 +0000 (0:00:01.495) 0:52:51.568 **** 2026-02-18 06:44:44.657721 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.657731 | orchestrator | 2026-02-18 06:44:44.657742 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 06:44:44.657753 | orchestrator | Wednesday 18 February 2026 06:44:03 +0000 (0:00:01.121) 0:52:52.689 **** 2026-02-18 06:44:44.657764 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.657775 | orchestrator | 2026-02-18 06:44:44.657786 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 06:44:44.657822 | orchestrator | Wednesday 18 February 2026 06:44:04 +0000 (0:00:01.150) 0:52:53.840 **** 2026-02-18 06:44:44.657835 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.657847 | orchestrator | 2026-02-18 06:44:44.657860 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 
06:44:44.657872 | orchestrator | Wednesday 18 February 2026 06:44:06 +0000 (0:00:01.158) 0:52:54.999 **** 2026-02-18 06:44:44.657884 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.657897 | orchestrator | 2026-02-18 06:44:44.657910 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 06:44:44.657923 | orchestrator | Wednesday 18 February 2026 06:44:07 +0000 (0:00:01.171) 0:52:56.170 **** 2026-02-18 06:44:44.657935 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.657947 | orchestrator | 2026-02-18 06:44:44.657960 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 06:44:44.657972 | orchestrator | Wednesday 18 February 2026 06:44:08 +0000 (0:00:01.139) 0:52:57.310 **** 2026-02-18 06:44:44.657985 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.657997 | orchestrator | 2026-02-18 06:44:44.658008 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 06:44:44.658079 | orchestrator | Wednesday 18 February 2026 06:44:09 +0000 (0:00:01.143) 0:52:58.453 **** 2026-02-18 06:44:44.658091 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658102 | orchestrator | 2026-02-18 06:44:44.658127 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 06:44:44.658138 | orchestrator | Wednesday 18 February 2026 06:44:10 +0000 (0:00:01.186) 0:52:59.640 **** 2026-02-18 06:44:44.658149 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658160 | orchestrator | 2026-02-18 06:44:44.658171 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 06:44:44.658182 | orchestrator | Wednesday 18 February 2026 06:44:11 +0000 (0:00:01.151) 0:53:00.791 **** 2026-02-18 06:44:44.658193 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.658204 | orchestrator | 2026-02-18 
06:44:44.658223 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 06:44:44.658235 | orchestrator | Wednesday 18 February 2026 06:44:13 +0000 (0:00:01.157) 0:53:01.949 **** 2026-02-18 06:44:44.658245 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.658257 | orchestrator | 2026-02-18 06:44:44.658267 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-18 06:44:44.658278 | orchestrator | Wednesday 18 February 2026 06:44:14 +0000 (0:00:01.326) 0:53:03.276 **** 2026-02-18 06:44:44.658289 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658300 | orchestrator | 2026-02-18 06:44:44.658311 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-18 06:44:44.658322 | orchestrator | Wednesday 18 February 2026 06:44:15 +0000 (0:00:01.219) 0:53:04.496 **** 2026-02-18 06:44:44.658333 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658344 | orchestrator | 2026-02-18 06:44:44.658355 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-18 06:44:44.658366 | orchestrator | Wednesday 18 February 2026 06:44:16 +0000 (0:00:01.268) 0:53:05.764 **** 2026-02-18 06:44:44.658377 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658388 | orchestrator | 2026-02-18 06:44:44.658399 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-18 06:44:44.658409 | orchestrator | Wednesday 18 February 2026 06:44:18 +0000 (0:00:01.152) 0:53:06.917 **** 2026-02-18 06:44:44.658421 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658432 | orchestrator | 2026-02-18 06:44:44.658443 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-18 06:44:44.658471 | orchestrator | Wednesday 18 February 2026 06:44:19 +0000 (0:00:01.184) 
0:53:08.102 **** 2026-02-18 06:44:44.658482 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658493 | orchestrator | 2026-02-18 06:44:44.658504 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-18 06:44:44.658516 | orchestrator | Wednesday 18 February 2026 06:44:20 +0000 (0:00:01.232) 0:53:09.334 **** 2026-02-18 06:44:44.658526 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658537 | orchestrator | 2026-02-18 06:44:44.658548 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-18 06:44:44.658559 | orchestrator | Wednesday 18 February 2026 06:44:21 +0000 (0:00:01.154) 0:53:10.489 **** 2026-02-18 06:44:44.658570 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658581 | orchestrator | 2026-02-18 06:44:44.658593 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-18 06:44:44.658605 | orchestrator | Wednesday 18 February 2026 06:44:22 +0000 (0:00:01.182) 0:53:11.671 **** 2026-02-18 06:44:44.658616 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658627 | orchestrator | 2026-02-18 06:44:44.658638 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-18 06:44:44.658649 | orchestrator | Wednesday 18 February 2026 06:44:23 +0000 (0:00:01.141) 0:53:12.813 **** 2026-02-18 06:44:44.658659 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658670 | orchestrator | 2026-02-18 06:44:44.658681 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-18 06:44:44.658692 | orchestrator | Wednesday 18 February 2026 06:44:25 +0000 (0:00:01.242) 0:53:14.055 **** 2026-02-18 06:44:44.658703 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658714 | orchestrator | 2026-02-18 06:44:44.658725 | orchestrator | TASK [ceph-common : 
Include configure_memory_allocator.yml] ******************** 2026-02-18 06:44:44.658736 | orchestrator | Wednesday 18 February 2026 06:44:26 +0000 (0:00:01.133) 0:53:15.189 **** 2026-02-18 06:44:44.658747 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658757 | orchestrator | 2026-02-18 06:44:44.658768 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-18 06:44:44.658779 | orchestrator | Wednesday 18 February 2026 06:44:27 +0000 (0:00:01.206) 0:53:16.395 **** 2026-02-18 06:44:44.658790 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.658826 | orchestrator | 2026-02-18 06:44:44.658838 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-18 06:44:44.658848 | orchestrator | Wednesday 18 February 2026 06:44:28 +0000 (0:00:01.347) 0:53:17.742 **** 2026-02-18 06:44:44.658859 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.658870 | orchestrator | 2026-02-18 06:44:44.658881 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-18 06:44:44.658892 | orchestrator | Wednesday 18 February 2026 06:44:30 +0000 (0:00:01.994) 0:53:19.737 **** 2026-02-18 06:44:44.658903 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.658914 | orchestrator | 2026-02-18 06:44:44.658925 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-18 06:44:44.658935 | orchestrator | Wednesday 18 February 2026 06:44:33 +0000 (0:00:02.191) 0:53:21.929 **** 2026-02-18 06:44:44.658946 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-02-18 06:44:44.658959 | orchestrator | 2026-02-18 06:44:44.658970 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-18 06:44:44.658981 | orchestrator | Wednesday 18 February 2026 06:44:34 +0000 (0:00:01.170) 
0:53:23.100 **** 2026-02-18 06:44:44.658992 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.659003 | orchestrator | 2026-02-18 06:44:44.659014 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-18 06:44:44.659024 | orchestrator | Wednesday 18 February 2026 06:44:35 +0000 (0:00:01.142) 0:53:24.242 **** 2026-02-18 06:44:44.659035 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.659046 | orchestrator | 2026-02-18 06:44:44.659062 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-18 06:44:44.659073 | orchestrator | Wednesday 18 February 2026 06:44:36 +0000 (0:00:01.150) 0:53:25.393 **** 2026-02-18 06:44:44.659084 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-18 06:44:44.659095 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-18 06:44:44.659106 | orchestrator | 2026-02-18 06:44:44.659117 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-18 06:44:44.659128 | orchestrator | Wednesday 18 February 2026 06:44:38 +0000 (0:00:01.836) 0:53:27.229 **** 2026-02-18 06:44:44.659139 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:44:44.659149 | orchestrator | 2026-02-18 06:44:44.659160 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-18 06:44:44.659171 | orchestrator | Wednesday 18 February 2026 06:44:39 +0000 (0:00:01.458) 0:53:28.688 **** 2026-02-18 06:44:44.659182 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.659193 | orchestrator | 2026-02-18 06:44:44.659204 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-18 06:44:44.659215 | orchestrator | Wednesday 18 February 2026 06:44:40 +0000 (0:00:01.174) 0:53:29.863 **** 2026-02-18 06:44:44.659226 | 
orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.659237 | orchestrator | 2026-02-18 06:44:44.659248 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-18 06:44:44.659259 | orchestrator | Wednesday 18 February 2026 06:44:42 +0000 (0:00:01.165) 0:53:31.028 **** 2026-02-18 06:44:44.659269 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:44:44.659280 | orchestrator | 2026-02-18 06:44:44.659291 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-18 06:44:44.659302 | orchestrator | Wednesday 18 February 2026 06:44:43 +0000 (0:00:01.230) 0:53:32.259 **** 2026-02-18 06:44:44.659313 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-02-18 06:44:44.659324 | orchestrator | 2026-02-18 06:44:44.659335 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-18 06:44:44.659353 | orchestrator | Wednesday 18 February 2026 06:44:44 +0000 (0:00:01.264) 0:53:33.523 **** 2026-02-18 06:45:31.874219 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:45:31.874310 | orchestrator | 2026-02-18 06:45:31.874339 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-18 06:45:31.874347 | orchestrator | Wednesday 18 February 2026 06:44:46 +0000 (0:00:01.779) 0:53:35.303 **** 2026-02-18 06:45:31.874355 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 06:45:31.874361 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 06:45:31.874367 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 06:45:31.874393 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874400 | orchestrator | 2026-02-18 06:45:31.874406 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-02-18 06:45:31.874413 | orchestrator | Wednesday 18 February 2026 06:44:47 +0000 (0:00:01.215) 0:53:36.518 **** 2026-02-18 06:45:31.874419 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874425 | orchestrator | 2026-02-18 06:45:31.874431 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-18 06:45:31.874437 | orchestrator | Wednesday 18 February 2026 06:44:48 +0000 (0:00:01.180) 0:53:37.699 **** 2026-02-18 06:45:31.874443 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874449 | orchestrator | 2026-02-18 06:45:31.874456 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-18 06:45:31.874462 | orchestrator | Wednesday 18 February 2026 06:44:49 +0000 (0:00:01.173) 0:53:38.873 **** 2026-02-18 06:45:31.874468 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874474 | orchestrator | 2026-02-18 06:45:31.874481 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-18 06:45:31.874487 | orchestrator | Wednesday 18 February 2026 06:44:51 +0000 (0:00:01.211) 0:53:40.084 **** 2026-02-18 06:45:31.874493 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874499 | orchestrator | 2026-02-18 06:45:31.874505 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-18 06:45:31.874511 | orchestrator | Wednesday 18 February 2026 06:44:52 +0000 (0:00:01.158) 0:53:41.243 **** 2026-02-18 06:45:31.874517 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874523 | orchestrator | 2026-02-18 06:45:31.874529 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-18 06:45:31.874535 | orchestrator | Wednesday 18 February 2026 06:44:53 +0000 (0:00:01.137) 0:53:42.380 **** 2026-02-18 06:45:31.874542 | orchestrator | 
ok: [testbed-node-5] 2026-02-18 06:45:31.874548 | orchestrator | 2026-02-18 06:45:31.874554 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-18 06:45:31.874560 | orchestrator | Wednesday 18 February 2026 06:44:56 +0000 (0:00:02.500) 0:53:44.881 **** 2026-02-18 06:45:31.874566 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:45:31.874572 | orchestrator | 2026-02-18 06:45:31.874578 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-18 06:45:31.874584 | orchestrator | Wednesday 18 February 2026 06:44:57 +0000 (0:00:01.146) 0:53:46.028 **** 2026-02-18 06:45:31.874590 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 2026-02-18 06:45:31.874596 | orchestrator | 2026-02-18 06:45:31.874602 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-18 06:45:31.874608 | orchestrator | Wednesday 18 February 2026 06:44:58 +0000 (0:00:01.148) 0:53:47.177 **** 2026-02-18 06:45:31.874614 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874620 | orchestrator | 2026-02-18 06:45:31.874627 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-18 06:45:31.874633 | orchestrator | Wednesday 18 February 2026 06:44:59 +0000 (0:00:01.172) 0:53:48.349 **** 2026-02-18 06:45:31.874639 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874645 | orchestrator | 2026-02-18 06:45:31.874663 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-18 06:45:31.874669 | orchestrator | Wednesday 18 February 2026 06:45:00 +0000 (0:00:01.213) 0:53:49.563 **** 2026-02-18 06:45:31.874681 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874687 | orchestrator | 2026-02-18 06:45:31.874693 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-02-18 06:45:31.874699 | orchestrator | Wednesday 18 February 2026 06:45:01 +0000 (0:00:01.154) 0:53:50.717 **** 2026-02-18 06:45:31.874705 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874711 | orchestrator | 2026-02-18 06:45:31.874717 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-18 06:45:31.874723 | orchestrator | Wednesday 18 February 2026 06:45:02 +0000 (0:00:01.151) 0:53:51.869 **** 2026-02-18 06:45:31.874729 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874735 | orchestrator | 2026-02-18 06:45:31.874741 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-18 06:45:31.874747 | orchestrator | Wednesday 18 February 2026 06:45:04 +0000 (0:00:01.190) 0:53:53.059 **** 2026-02-18 06:45:31.874753 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874759 | orchestrator | 2026-02-18 06:45:31.874765 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-18 06:45:31.874771 | orchestrator | Wednesday 18 February 2026 06:45:05 +0000 (0:00:01.205) 0:53:54.265 **** 2026-02-18 06:45:31.874779 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874787 | orchestrator | 2026-02-18 06:45:31.874794 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-18 06:45:31.874819 | orchestrator | Wednesday 18 February 2026 06:45:06 +0000 (0:00:01.172) 0:53:55.438 **** 2026-02-18 06:45:31.874827 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.874833 | orchestrator | 2026-02-18 06:45:31.874840 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-18 06:45:31.874847 | orchestrator | Wednesday 18 February 2026 06:45:07 +0000 (0:00:01.132) 0:53:56.571 **** 2026-02-18 06:45:31.874853 | orchestrator | ok: [testbed-node-5] 
2026-02-18 06:45:31.874860 | orchestrator | 2026-02-18 06:45:31.874867 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-18 06:45:31.874887 | orchestrator | Wednesday 18 February 2026 06:45:08 +0000 (0:00:01.239) 0:53:57.810 **** 2026-02-18 06:45:31.874894 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-02-18 06:45:31.874902 | orchestrator | 2026-02-18 06:45:31.874909 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-18 06:45:31.874916 | orchestrator | Wednesday 18 February 2026 06:45:10 +0000 (0:00:01.163) 0:53:58.973 **** 2026-02-18 06:45:31.874923 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-02-18 06:45:31.874931 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-18 06:45:31.874938 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-18 06:45:31.874945 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-18 06:45:31.874952 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-18 06:45:31.874959 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-18 06:45:31.874967 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-18 06:45:31.874973 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-18 06:45:31.874980 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 06:45:31.874987 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 06:45:31.874994 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 06:45:31.875001 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 06:45:31.875008 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 06:45:31.875015 | 
orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 06:45:31.875022 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-02-18 06:45:31.875029 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-02-18 06:45:31.875041 | orchestrator | 2026-02-18 06:45:31.875048 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-18 06:45:31.875055 | orchestrator | Wednesday 18 February 2026 06:45:16 +0000 (0:00:06.537) 0:54:05.511 **** 2026-02-18 06:45:31.875062 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-02-18 06:45:31.875069 | orchestrator | 2026-02-18 06:45:31.875076 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-18 06:45:31.875083 | orchestrator | Wednesday 18 February 2026 06:45:17 +0000 (0:00:01.285) 0:54:06.796 **** 2026-02-18 06:45:31.875091 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-18 06:45:31.875099 | orchestrator | 2026-02-18 06:45:31.875106 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-18 06:45:31.875113 | orchestrator | Wednesday 18 February 2026 06:45:19 +0000 (0:00:01.487) 0:54:08.284 **** 2026-02-18 06:45:31.875120 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-18 06:45:31.875127 | orchestrator | 2026-02-18 06:45:31.875133 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-18 06:45:31.875140 | orchestrator | Wednesday 18 February 2026 06:45:21 +0000 (0:00:01.981) 0:54:10.266 **** 2026-02-18 06:45:31.875147 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.875154 | orchestrator | 
2026-02-18 06:45:31.875161 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-18 06:45:31.875168 | orchestrator | Wednesday 18 February 2026 06:45:22 +0000 (0:00:01.126) 0:54:11.393 **** 2026-02-18 06:45:31.875179 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.875185 | orchestrator | 2026-02-18 06:45:31.875191 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-18 06:45:31.875197 | orchestrator | Wednesday 18 February 2026 06:45:23 +0000 (0:00:01.142) 0:54:12.535 **** 2026-02-18 06:45:31.875203 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.875209 | orchestrator | 2026-02-18 06:45:31.875215 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-18 06:45:31.875221 | orchestrator | Wednesday 18 February 2026 06:45:24 +0000 (0:00:01.132) 0:54:13.667 **** 2026-02-18 06:45:31.875227 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.875234 | orchestrator | 2026-02-18 06:45:31.875240 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-18 06:45:31.875246 | orchestrator | Wednesday 18 February 2026 06:45:25 +0000 (0:00:01.167) 0:54:14.835 **** 2026-02-18 06:45:31.875252 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.875258 | orchestrator | 2026-02-18 06:45:31.875264 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-18 06:45:31.875270 | orchestrator | Wednesday 18 February 2026 06:45:27 +0000 (0:00:01.136) 0:54:15.972 **** 2026-02-18 06:45:31.875276 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.875282 | orchestrator | 2026-02-18 06:45:31.875289 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-18 06:45:31.875295 | 
orchestrator | Wednesday 18 February 2026 06:45:28 +0000 (0:00:01.133) 0:54:17.106 **** 2026-02-18 06:45:31.875301 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.875307 | orchestrator | 2026-02-18 06:45:31.875313 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-18 06:45:31.875319 | orchestrator | Wednesday 18 February 2026 06:45:29 +0000 (0:00:01.238) 0:54:18.345 **** 2026-02-18 06:45:31.875325 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.875331 | orchestrator | 2026-02-18 06:45:31.875337 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-18 06:45:31.875343 | orchestrator | Wednesday 18 February 2026 06:45:30 +0000 (0:00:01.134) 0:54:19.480 **** 2026-02-18 06:45:31.875353 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:45:31.875360 | orchestrator | 2026-02-18 06:45:31.875370 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-18 06:46:28.212600 | orchestrator | Wednesday 18 February 2026 06:45:31 +0000 (0:00:01.258) 0:54:20.738 **** 2026-02-18 06:46:28.212722 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.212740 | orchestrator | 2026-02-18 06:46:28.212753 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-18 06:46:28.212765 | orchestrator | Wednesday 18 February 2026 06:45:32 +0000 (0:00:01.103) 0:54:21.841 **** 2026-02-18 06:46:28.212776 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.212787 | orchestrator | 2026-02-18 06:46:28.212798 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-18 06:46:28.212856 | orchestrator | Wednesday 18 February 2026 06:45:34 +0000 (0:00:01.131) 0:54:22.973 **** 2026-02-18 06:46:28.212868 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] 2026-02-18 06:46:28.212879 | orchestrator | 2026-02-18 06:46:28.212890 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-18 06:46:28.212901 | orchestrator | Wednesday 18 February 2026 06:45:38 +0000 (0:00:04.753) 0:54:27.727 **** 2026-02-18 06:46:28.212913 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-18 06:46:28.212925 | orchestrator | 2026-02-18 06:46:28.212936 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-18 06:46:28.212947 | orchestrator | Wednesday 18 February 2026 06:45:40 +0000 (0:00:01.207) 0:54:28.934 **** 2026-02-18 06:46:28.212960 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-18 06:46:28.212975 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-18 06:46:28.212987 | orchestrator | 2026-02-18 06:46:28.212998 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-18 06:46:28.213009 | orchestrator | Wednesday 18 February 2026 06:45:44 +0000 (0:00:04.719) 0:54:33.654 **** 2026-02-18 06:46:28.213020 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.213031 | orchestrator | 2026-02-18 06:46:28.213042 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-02-18 06:46:28.213053 | orchestrator | Wednesday 18 February 2026 06:45:45 +0000 (0:00:01.138) 0:54:34.792 **** 2026-02-18 06:46:28.213064 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.213075 | orchestrator | 2026-02-18 06:46:28.213086 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:46:28.213097 | orchestrator | Wednesday 18 February 2026 06:45:47 +0000 (0:00:01.213) 0:54:36.006 **** 2026-02-18 06:46:28.213108 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.213119 | orchestrator | 2026-02-18 06:46:28.213130 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:46:28.213159 | orchestrator | Wednesday 18 February 2026 06:45:48 +0000 (0:00:01.142) 0:54:37.149 **** 2026-02-18 06:46:28.213172 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.213185 | orchestrator | 2026-02-18 06:46:28.213198 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:46:28.213210 | orchestrator | Wednesday 18 February 2026 06:45:49 +0000 (0:00:01.184) 0:54:38.333 **** 2026-02-18 06:46:28.213222 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.213256 | orchestrator | 2026-02-18 06:46:28.213268 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:46:28.213281 | orchestrator | Wednesday 18 February 2026 06:45:50 +0000 (0:00:01.152) 0:54:39.486 **** 2026-02-18 06:46:28.213294 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:46:28.213307 | orchestrator | 2026-02-18 06:46:28.213320 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:46:28.213332 | orchestrator | Wednesday 18 February 2026 06:45:51 +0000 (0:00:01.239) 0:54:40.726 
**** 2026-02-18 06:46:28.213344 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-18 06:46:28.213357 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 06:46:28.213370 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-18 06:46:28.213383 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.213395 | orchestrator | 2026-02-18 06:46:28.213407 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:46:28.213420 | orchestrator | Wednesday 18 February 2026 06:45:53 +0000 (0:00:01.427) 0:54:42.154 **** 2026-02-18 06:46:28.213432 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-18 06:46:28.213445 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 06:46:28.213457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-18 06:46:28.213470 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.213482 | orchestrator | 2026-02-18 06:46:28.213495 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:46:28.213506 | orchestrator | Wednesday 18 February 2026 06:45:54 +0000 (0:00:01.485) 0:54:43.640 **** 2026-02-18 06:46:28.213517 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-18 06:46:28.213528 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 06:46:28.213539 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-18 06:46:28.213566 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.213578 | orchestrator | 2026-02-18 06:46:28.213589 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:46:28.213600 | orchestrator | Wednesday 18 February 2026 06:45:56 +0000 (0:00:01.806) 0:54:45.446 **** 2026-02-18 06:46:28.213611 | orchestrator | ok: 
[testbed-node-5] 2026-02-18 06:46:28.213622 | orchestrator | 2026-02-18 06:46:28.213633 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:46:28.213644 | orchestrator | Wednesday 18 February 2026 06:45:57 +0000 (0:00:01.202) 0:54:46.650 **** 2026-02-18 06:46:28.213655 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-18 06:46:28.213666 | orchestrator | 2026-02-18 06:46:28.213677 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-18 06:46:28.213688 | orchestrator | Wednesday 18 February 2026 06:45:59 +0000 (0:00:01.871) 0:54:48.521 **** 2026-02-18 06:46:28.213699 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:46:28.213709 | orchestrator | 2026-02-18 06:46:28.213720 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-18 06:46:28.213731 | orchestrator | Wednesday 18 February 2026 06:46:01 +0000 (0:00:01.815) 0:54:50.336 **** 2026-02-18 06:46:28.213742 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.213753 | orchestrator | 2026-02-18 06:46:28.213763 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-18 06:46:28.213774 | orchestrator | Wednesday 18 February 2026 06:46:02 +0000 (0:00:01.145) 0:54:51.482 **** 2026-02-18 06:46:28.213785 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5 2026-02-18 06:46:28.213796 | orchestrator | 2026-02-18 06:46:28.213827 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-18 06:46:28.213839 | orchestrator | Wednesday 18 February 2026 06:46:04 +0000 (0:00:01.526) 0:54:53.009 **** 2026-02-18 06:46:28.213850 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-18 06:46:28.213861 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 
2026-02-18 06:46:28.213881 | orchestrator | 2026-02-18 06:46:28.213892 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-18 06:46:28.213903 | orchestrator | Wednesday 18 February 2026 06:46:06 +0000 (0:00:01.896) 0:54:54.905 **** 2026-02-18 06:46:28.213913 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 06:46:28.213924 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-18 06:46:28.213935 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 06:46:28.213946 | orchestrator | 2026-02-18 06:46:28.213957 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-18 06:46:28.213967 | orchestrator | Wednesday 18 February 2026 06:46:09 +0000 (0:00:03.167) 0:54:58.072 **** 2026-02-18 06:46:28.213978 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-18 06:46:28.213989 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-18 06:46:28.214000 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:46:28.214011 | orchestrator | 2026-02-18 06:46:28.214077 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-18 06:46:28.214089 | orchestrator | Wednesday 18 February 2026 06:46:11 +0000 (0:00:01.976) 0:55:00.049 **** 2026-02-18 06:46:28.214100 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:46:28.214111 | orchestrator | 2026-02-18 06:46:28.214121 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-18 06:46:28.214132 | orchestrator | Wednesday 18 February 2026 06:46:12 +0000 (0:00:01.572) 0:55:01.621 **** 2026-02-18 06:46:28.214143 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:46:28.214153 | orchestrator | 2026-02-18 06:46:28.214171 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-18 
06:46:28.214182 | orchestrator | Wednesday 18 February 2026 06:46:13 +0000 (0:00:01.220) 0:55:02.842 **** 2026-02-18 06:46:28.214193 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5 2026-02-18 06:46:28.214204 | orchestrator | 2026-02-18 06:46:28.214215 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-18 06:46:28.214226 | orchestrator | Wednesday 18 February 2026 06:46:15 +0000 (0:00:01.624) 0:55:04.466 **** 2026-02-18 06:46:28.214237 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5 2026-02-18 06:46:28.214248 | orchestrator | 2026-02-18 06:46:28.214259 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-18 06:46:28.214269 | orchestrator | Wednesday 18 February 2026 06:46:17 +0000 (0:00:01.486) 0:55:05.953 **** 2026-02-18 06:46:28.214280 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:46:28.214291 | orchestrator | 2026-02-18 06:46:28.214302 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-18 06:46:28.214313 | orchestrator | Wednesday 18 February 2026 06:46:19 +0000 (0:00:02.005) 0:55:07.959 **** 2026-02-18 06:46:28.214324 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:46:28.214335 | orchestrator | 2026-02-18 06:46:28.214346 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-18 06:46:28.214356 | orchestrator | Wednesday 18 February 2026 06:46:20 +0000 (0:00:01.862) 0:55:09.821 **** 2026-02-18 06:46:28.214367 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:46:28.214378 | orchestrator | 2026-02-18 06:46:28.214389 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-18 06:46:28.214400 | orchestrator | Wednesday 18 February 2026 06:46:23 +0000 (0:00:02.204) 0:55:12.027 **** 2026-02-18 06:46:28.214410 | 
orchestrator | ok: [testbed-node-5] 2026-02-18 06:46:28.214421 | orchestrator | 2026-02-18 06:46:28.214432 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-18 06:46:28.214443 | orchestrator | Wednesday 18 February 2026 06:46:25 +0000 (0:00:02.272) 0:55:14.299 **** 2026-02-18 06:46:28.214454 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:46:28.214465 | orchestrator | 2026-02-18 06:46:28.214476 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-02-18 06:46:28.214494 | orchestrator | Wednesday 18 February 2026 06:46:27 +0000 (0:00:01.621) 0:55:15.921 **** 2026-02-18 06:46:28.214514 | orchestrator | skipping: [testbed-node-5] 2026-02-18 06:47:02.881945 | orchestrator | 2026-02-18 06:47:02.882131 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-02-18 06:47:02.882152 | orchestrator | Wednesday 18 February 2026 06:46:28 +0000 (0:00:01.155) 0:55:17.076 **** 2026-02-18 06:47:02.882165 | orchestrator | ok: [testbed-node-5] 2026-02-18 06:47:02.882178 | orchestrator | 2026-02-18 06:47:02.882189 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-02-18 06:47:02.882200 | orchestrator | 2026-02-18 06:47:02.882212 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 06:47:02.882223 | orchestrator | Wednesday 18 February 2026 06:46:37 +0000 (0:00:09.182) 0:55:26.259 **** 2026-02-18 06:47:02.882234 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4 2026-02-18 06:47:02.882246 | orchestrator | 2026-02-18 06:47:02.882258 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 06:47:02.882269 | orchestrator | Wednesday 18 February 2026 06:46:38 +0000 (0:00:01.557) 0:55:27.816 **** 2026-02-18 06:47:02.882280 | 
orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:02.882291 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:02.882302 | orchestrator |
2026-02-18 06:47:02.882313 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-18 06:47:02.882324 | orchestrator | Wednesday 18 February 2026 06:46:40 +0000 (0:00:01.616) 0:55:29.433 ****
2026-02-18 06:47:02.882335 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:02.882346 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:02.882357 | orchestrator |
2026-02-18 06:47:02.882368 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-18 06:47:02.882379 | orchestrator | Wednesday 18 February 2026 06:46:41 +0000 (0:00:01.256) 0:55:30.689 ****
2026-02-18 06:47:02.882389 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:02.882400 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:02.882411 | orchestrator |
2026-02-18 06:47:02.882422 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-18 06:47:02.882433 | orchestrator | Wednesday 18 February 2026 06:46:43 +0000 (0:00:01.585) 0:55:32.275 ****
2026-02-18 06:47:02.882446 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:02.882459 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:02.882472 | orchestrator |
2026-02-18 06:47:02.882485 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-18 06:47:02.882498 | orchestrator | Wednesday 18 February 2026 06:46:44 +0000 (0:00:01.280) 0:55:33.555 ****
2026-02-18 06:47:02.882511 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:02.882523 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:02.882535 | orchestrator |
2026-02-18 06:47:02.882548 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-18 06:47:02.882561 | orchestrator | Wednesday 18 February 2026 06:46:46 +0000 (0:00:01.362) 0:55:34.918 ****
2026-02-18 06:47:02.882574 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:02.882586 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:02.882598 | orchestrator |
2026-02-18 06:47:02.882611 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-18 06:47:02.882623 | orchestrator | Wednesday 18 February 2026 06:46:47 +0000 (0:00:01.425) 0:55:36.344 ****
2026-02-18 06:47:02.882636 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:02.882649 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:47:02.882662 | orchestrator |
2026-02-18 06:47:02.882674 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-18 06:47:02.882686 | orchestrator | Wednesday 18 February 2026 06:46:48 +0000 (0:00:01.242) 0:55:37.586 ****
2026-02-18 06:47:02.882699 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:02.882711 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:02.882723 | orchestrator |
2026-02-18 06:47:02.882736 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-18 06:47:02.882790 | orchestrator | Wednesday 18 February 2026 06:46:50 +0000 (0:00:01.300) 0:55:38.887 ****
2026-02-18 06:47:02.882804 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:47:02.882837 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:47:02.882848 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:47:02.882859 | orchestrator |
2026-02-18 06:47:02.882869 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-18 06:47:02.882880 | orchestrator | Wednesday 18 February 2026 06:46:51 +0000 (0:00:01.737) 0:55:40.624 ****
2026-02-18 06:47:02.882891 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:02.882902 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:02.882913 | orchestrator |
2026-02-18 06:47:02.882924 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-18 06:47:02.882935 | orchestrator | Wednesday 18 February 2026 06:46:53 +0000 (0:00:01.388) 0:55:42.013 ****
2026-02-18 06:47:02.882946 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:47:02.882957 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:47:02.882968 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:47:02.882979 | orchestrator |
2026-02-18 06:47:02.882989 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-18 06:47:02.883000 | orchestrator | Wednesday 18 February 2026 06:46:56 +0000 (0:00:03.303) 0:55:45.316 ****
2026-02-18 06:47:02.883011 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-18 06:47:02.883022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-18 06:47:02.883033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-18 06:47:02.883044 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:02.883055 | orchestrator |
2026-02-18 06:47:02.883066 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-18 06:47:02.883077 | orchestrator | Wednesday 18 February 2026 06:46:58 +0000 (0:00:01.945) 0:55:47.262 ****
2026-02-18 06:47:02.883106 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:47:02.883122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:47:02.883133 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:47:02.883144 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:02.883155 | orchestrator |
2026-02-18 06:47:02.883166 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-18 06:47:02.883177 | orchestrator | Wednesday 18 February 2026 06:47:00 +0000 (0:00:02.062) 0:55:49.325 ****
2026-02-18 06:47:02.883190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:47:02.883204 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:47:02.883224 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:47:02.883235 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:02.883246 | orchestrator |
2026-02-18 06:47:02.883257 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-18 06:47:02.883267 | orchestrator | Wednesday 18 February 2026 06:47:01 +0000 (0:00:01.191) 0:55:50.517 ****
2026-02-18 06:47:02.883285 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:46:53.665249', 'end': '2026-02-18 06:46:53.713461', 'delta': '0:00:00.048212', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:47:02.883300 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:46:54.625480', 'end': '2026-02-18 06:46:54.678442', 'delta': '0:00:00.052962', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:47:02.883321 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:46:55.209844', 'end': '2026-02-18 06:46:55.262051', 'delta': '0:00:00.052207', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:47:22.506966 | orchestrator |
2026-02-18 06:47:22.507090 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-18 06:47:22.507109 | orchestrator | Wednesday 18 February 2026 06:47:02 +0000 (0:00:01.230) 0:55:51.748 ****
2026-02-18 06:47:22.507121 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:22.507133 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:22.507145 | orchestrator |
2026-02-18 06:47:22.507156 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-18 06:47:22.507168 | orchestrator | Wednesday 18 February 2026 06:47:04 +0000 (0:00:01.420) 0:55:53.169 ****
2026-02-18 06:47:22.507179 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:22.507214 | orchestrator |
2026-02-18 06:47:22.507226 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-18 06:47:22.507238 | orchestrator | Wednesday 18 February 2026 06:47:05 +0000 (0:00:01.260) 0:55:54.430 ****
2026-02-18 06:47:22.507249 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:22.507260 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:22.507271 | orchestrator |
2026-02-18 06:47:22.507282 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-18 06:47:22.507293 | orchestrator | Wednesday 18 February 2026 06:47:06 +0000 (0:00:01.234) 0:55:55.664 ****
2026-02-18 06:47:22.507304 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-18 06:47:22.507315 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-18 06:47:22.507326 | orchestrator |
2026-02-18 06:47:22.507337 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:47:22.507348 | orchestrator | Wednesday 18 February 2026 06:47:09 +0000 (0:00:02.227) 0:55:57.892 ****
2026-02-18 06:47:22.507359 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:22.507370 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:22.507381 | orchestrator |
2026-02-18 06:47:22.507392 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-18 06:47:22.507404 | orchestrator | Wednesday 18 February 2026 06:47:10 +0000 (0:00:01.312) 0:55:59.205 ****
2026-02-18 06:47:22.507414 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:22.507426 | orchestrator |
2026-02-18 06:47:22.507455 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-18 06:47:22.507469 | orchestrator | Wednesday 18 February 2026 06:47:11 +0000 (0:00:01.118) 0:56:00.324 ****
2026-02-18 06:47:22.507481 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:22.507506 | orchestrator |
2026-02-18 06:47:22.507519 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:47:22.507532 | orchestrator | Wednesday 18 February 2026 06:47:12 +0000 (0:00:01.248) 0:56:01.573 ****
2026-02-18 06:47:22.507543 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:22.507556 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:47:22.507568 | orchestrator |
2026-02-18 06:47:22.507581 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-18 06:47:22.507607 | orchestrator | Wednesday 18 February 2026 06:47:14 +0000 (0:00:01.646) 0:56:03.219 ****
2026-02-18 06:47:22.507620 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:22.507632 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:47:22.507644 | orchestrator |
2026-02-18 06:47:22.507657 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-18 06:47:22.507669 | orchestrator | Wednesday 18 February 2026 06:47:15 +0000 (0:00:01.271) 0:56:04.490 ****
2026-02-18 06:47:22.507682 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:22.507694 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:22.507707 | orchestrator |
2026-02-18 06:47:22.507719 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-18 06:47:22.507732 | orchestrator | Wednesday 18 February 2026 06:47:16 +0000 (0:00:01.252) 0:56:05.743 ****
2026-02-18 06:47:22.507744 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:22.507756 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:47:22.507769 | orchestrator |
2026-02-18 06:47:22.507781 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-18 06:47:22.507792 | orchestrator | Wednesday 18 February 2026 06:47:18 +0000 (0:00:01.281) 0:56:07.025 ****
2026-02-18 06:47:22.507803 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:22.507841 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:22.507853 | orchestrator |
2026-02-18 06:47:22.507864 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-18 06:47:22.507875 | orchestrator | Wednesday 18 February 2026 06:47:19 +0000 (0:00:01.318) 0:56:08.344 ****
2026-02-18 06:47:22.507886 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:22.507896 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:47:22.507916 | orchestrator |
2026-02-18 06:47:22.507927 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-18 06:47:22.507939 | orchestrator | Wednesday 18 February 2026 06:47:20 +0000 (0:00:01.241) 0:56:09.586 ****
2026-02-18 06:47:22.507950 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:47:22.507961 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:47:22.507972 | orchestrator |
2026-02-18 06:47:22.507983 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-18 06:47:22.507994 | orchestrator | Wednesday 18 February 2026 06:47:21 +0000 (0:00:01.261) 0:56:10.848 ****
2026-02-18 06:47:22.508008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.508042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'uuids': ['b16ba19b-4a40-4954-b96f-45d5ea534fea'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN']}})
2026-02-18 06:47:22.508057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3f0eb34d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-18 06:47:22.508070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31']}})
2026-02-18 06:47:22.508088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.508101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.508119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-52-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-18 06:47:22.508131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.508150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ', 'dm-uuid-CRYPT-LUKS2-a588a620006c41148df487d2b156bd76-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-18 06:47:22.650466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.650571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'uuids': ['a588a620-006c-4114-8df4-87d2b156bd76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ']}})
2026-02-18 06:47:22.650604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2']}})
2026-02-18 06:47:22.650618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.650675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b754618', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-18 06:47:22.650690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.650703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.650715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.650732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN', 'dm-uuid-CRYPT-LUKS2-b16ba19b4a404954b96f45d5ea534fea-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-18 06:47:22.650753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'uuids': ['979a0cee-d595-4490-b8ce-61c0ee691ca0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17']}})
2026-02-18 06:47:22.650766 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:47:22.650780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4d92644', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-18 06:47:22.650800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906']}})
2026-02-18 06:47:22.754714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.754797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.754871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-18 06:47:22.754910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.754919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF', 'dm-uuid-CRYPT-LUKS2-618550ddd31f436ab0c76e785ef9ce84-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-18 06:47:22.754926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.754935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'uuids': ['618550dd-d31f-436a-b0c7-6e785ef9ce84'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF']}})
2026-02-18 06:47:22.754958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1']}})
2026-02-18 06:47:22.754966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:47:22.754980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f33eab1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 'uuids': ['5C78-612A'],
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 06:47:22.754995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:47:22.755002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 06:47:22.755015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17', 'dm-uuid-CRYPT-LUKS2-979a0ceed5954490b8ce61c0ee691ca0-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 06:47:24.175873 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:47:24.175972 | orchestrator | 2026-02-18 06:47:24.175989 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 06:47:24.176002 | orchestrator | Wednesday 18 February 2026 06:47:23 +0000 (0:00:01.946) 0:56:12.795 **** 2026-02-18 06:47:24.176019 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.176075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'uuids': ['b16ba19b-4a40-4954-b96f-45d5ea534fea'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.176088 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3f0eb34d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.176099 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.176131 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.176143 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.176168 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-52-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.176180 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.176191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ', 'dm-uuid-CRYPT-LUKS2-a588a620006c41148df487d2b156bd76-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.176202 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.176220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'uuids': ['a588a620-006c-4114-8df4-87d2b156bd76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.267331 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.267488 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.267519 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'uuids': ['979a0cee-d595-4490-b8ce-61c0ee691ca0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.267535 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.267551 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4d92644', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.267601 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b754618', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.267636 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.267652 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.267676 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.398126 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.398251 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN', 'dm-uuid-CRYPT-LUKS2-b16ba19b4a404954b96f45d5ea534fea-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.398275 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:47:24.398290 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.398303 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-46-00'], 'labels': ['config-2'], 'masters': 
[]}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.398315 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.398375 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF', 'dm-uuid-CRYPT-LUKS2-618550ddd31f436ab0c76e785ef9ce84-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.398405 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.398420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'uuids': ['618550dd-d31f-436a-b0c7-6e785ef9ce84'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.398429 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 
'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.398438 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:24.398458 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f33eab1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 
'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:54.278996 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:54.279141 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:54.279169 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17', 'dm-uuid-CRYPT-LUKS2-979a0ceed5954490b8ce61c0ee691ca0-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:47:54.279190 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:47:54.279231 | orchestrator | 2026-02-18 06:47:54.279244 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:47:54.279257 | orchestrator | Wednesday 18 February 2026 06:47:25 +0000 (0:00:01.600) 0:56:14.396 **** 2026-02-18 06:47:54.279268 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:47:54.279280 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:47:54.279291 | orchestrator | 2026-02-18 06:47:54.279302 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:47:54.279313 | orchestrator | Wednesday 18 February 2026 06:47:27 +0000 (0:00:01.671) 0:56:16.067 **** 2026-02-18 06:47:54.279324 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:47:54.279343 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:47:54.279359 | orchestrator | 2026-02-18 06:47:54.279377 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:47:54.279394 | orchestrator | Wednesday 18 February 2026 06:47:28 +0000 (0:00:01.234) 0:56:17.302 **** 2026-02-18 06:47:54.279412 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:47:54.279467 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:47:54.279488 | orchestrator | 2026-02-18 06:47:54.279506 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:47:54.279527 | orchestrator | Wednesday 18 February 2026 06:47:30 +0000 (0:00:01.662) 0:56:18.964 **** 2026-02-18 06:47:54.279547 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:47:54.279567 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:47:54.279580 | orchestrator | 2026-02-18 06:47:54.279593 | orchestrator | TASK [ceph-facts : 
Read osd pool default crush rule] *************************** 2026-02-18 06:47:54.279606 | orchestrator | Wednesday 18 February 2026 06:47:31 +0000 (0:00:01.228) 0:56:20.193 **** 2026-02-18 06:47:54.279618 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:47:54.279631 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:47:54.279643 | orchestrator | 2026-02-18 06:47:54.279656 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:47:54.279669 | orchestrator | Wednesday 18 February 2026 06:47:33 +0000 (0:00:01.737) 0:56:21.931 **** 2026-02-18 06:47:54.279681 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:47:54.279693 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:47:54.279705 | orchestrator | 2026-02-18 06:47:54.279732 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:47:54.279745 | orchestrator | Wednesday 18 February 2026 06:47:34 +0000 (0:00:01.319) 0:56:23.251 **** 2026-02-18 06:47:54.279758 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-18 06:47:54.279770 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-18 06:47:54.279782 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-18 06:47:54.279795 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-18 06:47:54.279807 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-18 06:47:54.279851 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-18 06:47:54.279865 | orchestrator | 2026-02-18 06:47:54.279877 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:47:54.279888 | orchestrator | Wednesday 18 February 2026 06:47:36 +0000 (0:00:01.897) 0:56:25.148 **** 2026-02-18 06:47:54.279919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-18 06:47:54.279931 | orchestrator 
| skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-18 06:47:54.279942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-18 06:47:54.279953 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:47:54.279964 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-18 06:47:54.279974 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-18 06:47:54.279985 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-18 06:47:54.279996 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:47:54.280007 | orchestrator | 2026-02-18 06:47:54.280018 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:47:54.280040 | orchestrator | Wednesday 18 February 2026 06:47:37 +0000 (0:00:01.324) 0:56:26.472 **** 2026-02-18 06:47:54.280052 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4 2026-02-18 06:47:54.280064 | orchestrator | 2026-02-18 06:47:54.280075 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:47:54.280087 | orchestrator | Wednesday 18 February 2026 06:47:38 +0000 (0:00:01.268) 0:56:27.741 **** 2026-02-18 06:47:54.280098 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:47:54.280109 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:47:54.280120 | orchestrator | 2026-02-18 06:47:54.280130 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:47:54.280141 | orchestrator | Wednesday 18 February 2026 06:47:40 +0000 (0:00:01.373) 0:56:29.115 **** 2026-02-18 06:47:54.280152 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:47:54.280163 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:47:54.280271 | orchestrator | 2026-02-18 06:47:54.280285 
| orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:47:54.280296 | orchestrator | Wednesday 18 February 2026 06:47:41 +0000 (0:00:01.562) 0:56:30.677 **** 2026-02-18 06:47:54.280307 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:47:54.280318 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:47:54.280329 | orchestrator | 2026-02-18 06:47:54.280340 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:47:54.280350 | orchestrator | Wednesday 18 February 2026 06:47:43 +0000 (0:00:01.253) 0:56:31.930 **** 2026-02-18 06:47:54.280376 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:47:54.280397 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:47:54.280408 | orchestrator | 2026-02-18 06:47:54.280419 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:47:54.280430 | orchestrator | Wednesday 18 February 2026 06:47:44 +0000 (0:00:01.402) 0:56:33.333 **** 2026-02-18 06:47:54.280441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:47:54.280451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:47:54.280462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 06:47:54.280473 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:47:54.280483 | orchestrator | 2026-02-18 06:47:54.280494 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:47:54.280505 | orchestrator | Wednesday 18 February 2026 06:47:45 +0000 (0:00:01.400) 0:56:34.734 **** 2026-02-18 06:47:54.280515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:47:54.280526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:47:54.280537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  
2026-02-18 06:47:54.280547 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:47:54.280558 | orchestrator | 2026-02-18 06:47:54.280568 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:47:54.280579 | orchestrator | Wednesday 18 February 2026 06:47:47 +0000 (0:00:01.452) 0:56:36.186 **** 2026-02-18 06:47:54.280590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:47:54.280600 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:47:54.280611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 06:47:54.280621 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:47:54.280632 | orchestrator | 2026-02-18 06:47:54.280643 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:47:54.280654 | orchestrator | Wednesday 18 February 2026 06:47:48 +0000 (0:00:01.429) 0:56:37.616 **** 2026-02-18 06:47:54.280664 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:47:54.280675 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:47:54.280686 | orchestrator | 2026-02-18 06:47:54.280696 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:47:54.280715 | orchestrator | Wednesday 18 February 2026 06:47:50 +0000 (0:00:01.312) 0:56:38.928 **** 2026-02-18 06:47:54.280726 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-18 06:47:54.280737 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-18 06:47:54.280747 | orchestrator | 2026-02-18 06:47:54.280765 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 06:47:54.280776 | orchestrator | Wednesday 18 February 2026 06:47:51 +0000 (0:00:01.928) 0:56:40.857 **** 2026-02-18 06:47:54.280787 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 
06:47:54.280797 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:47:54.280808 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:47:54.280845 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-18 06:47:54.280862 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:47:54.280878 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:47:54.280906 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:48:40.175882 | orchestrator | 2026-02-18 06:48:40.176002 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 06:48:40.176019 | orchestrator | Wednesday 18 February 2026 06:47:54 +0000 (0:00:02.279) 0:56:43.136 **** 2026-02-18 06:48:40.176031 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:48:40.176043 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:48:40.176054 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:48:40.176066 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-18 06:48:40.176077 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 06:48:40.176088 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:48:40.176099 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:48:40.176110 | orchestrator | 2026-02-18 06:48:40.176121 | orchestrator | TASK [Prevent restarts from the packaging] ************************************* 2026-02-18 
06:48:40.176132 | orchestrator | Wednesday 18 February 2026 06:47:56 +0000 (0:00:02.668) 0:56:45.805 **** 2026-02-18 06:48:40.176143 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.176154 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.176165 | orchestrator | 2026-02-18 06:48:40.176176 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 06:48:40.176187 | orchestrator | Wednesday 18 February 2026 06:47:58 +0000 (0:00:01.347) 0:56:47.153 **** 2026-02-18 06:48:40.176198 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4 2026-02-18 06:48:40.176209 | orchestrator | 2026-02-18 06:48:40.176219 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 06:48:40.176230 | orchestrator | Wednesday 18 February 2026 06:47:59 +0000 (0:00:01.273) 0:56:48.427 **** 2026-02-18 06:48:40.176241 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4 2026-02-18 06:48:40.176252 | orchestrator | 2026-02-18 06:48:40.176263 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 06:48:40.176273 | orchestrator | Wednesday 18 February 2026 06:48:01 +0000 (0:00:01.509) 0:56:49.936 **** 2026-02-18 06:48:40.176284 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.176295 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.176306 | orchestrator | 2026-02-18 06:48:40.176316 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 06:48:40.176354 | orchestrator | Wednesday 18 February 2026 06:48:02 +0000 (0:00:01.285) 0:56:51.222 **** 2026-02-18 06:48:40.176368 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:48:40.176380 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:48:40.176393 | 
orchestrator | 2026-02-18 06:48:40.176405 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-18 06:48:40.176417 | orchestrator | Wednesday 18 February 2026 06:48:04 +0000 (0:00:01.667) 0:56:52.890 **** 2026-02-18 06:48:40.176430 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:48:40.176443 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:48:40.176454 | orchestrator | 2026-02-18 06:48:40.176467 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-18 06:48:40.176480 | orchestrator | Wednesday 18 February 2026 06:48:05 +0000 (0:00:01.680) 0:56:54.570 **** 2026-02-18 06:48:40.176492 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:48:40.176503 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:48:40.176514 | orchestrator | 2026-02-18 06:48:40.176524 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-18 06:48:40.176535 | orchestrator | Wednesday 18 February 2026 06:48:07 +0000 (0:00:01.680) 0:56:56.250 **** 2026-02-18 06:48:40.176546 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.176557 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.176568 | orchestrator | 2026-02-18 06:48:40.176578 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-18 06:48:40.176589 | orchestrator | Wednesday 18 February 2026 06:48:08 +0000 (0:00:01.245) 0:56:57.496 **** 2026-02-18 06:48:40.176600 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.176611 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.176621 | orchestrator | 2026-02-18 06:48:40.176632 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-18 06:48:40.176643 | orchestrator | Wednesday 18 February 2026 06:48:09 +0000 (0:00:01.341) 0:56:58.837 **** 2026-02-18 06:48:40.176654 | orchestrator | skipping: 
[testbed-node-3] 2026-02-18 06:48:40.176664 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.176675 | orchestrator | 2026-02-18 06:48:40.176686 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-18 06:48:40.176696 | orchestrator | Wednesday 18 February 2026 06:48:11 +0000 (0:00:01.596) 0:57:00.434 **** 2026-02-18 06:48:40.176707 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:48:40.176732 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:48:40.176743 | orchestrator | 2026-02-18 06:48:40.176754 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-18 06:48:40.176765 | orchestrator | Wednesday 18 February 2026 06:48:13 +0000 (0:00:01.702) 0:57:02.136 **** 2026-02-18 06:48:40.176776 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:48:40.176786 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:48:40.176797 | orchestrator | 2026-02-18 06:48:40.176808 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-18 06:48:40.176818 | orchestrator | Wednesday 18 February 2026 06:48:14 +0000 (0:00:01.720) 0:57:03.857 **** 2026-02-18 06:48:40.176851 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.176862 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.176873 | orchestrator | 2026-02-18 06:48:40.176884 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 06:48:40.176894 | orchestrator | Wednesday 18 February 2026 06:48:16 +0000 (0:00:01.272) 0:57:05.130 **** 2026-02-18 06:48:40.176905 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.176934 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.176946 | orchestrator | 2026-02-18 06:48:40.176961 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 06:48:40.176979 | orchestrator | Wednesday 
18 February 2026 06:48:17 +0000 (0:00:01.322) 0:57:06.453 **** 2026-02-18 06:48:40.176997 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:48:40.177013 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:48:40.177032 | orchestrator | 2026-02-18 06:48:40.177049 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 06:48:40.177082 | orchestrator | Wednesday 18 February 2026 06:48:18 +0000 (0:00:01.272) 0:57:07.725 **** 2026-02-18 06:48:40.177102 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:48:40.177114 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:48:40.177125 | orchestrator | 2026-02-18 06:48:40.177136 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 06:48:40.177147 | orchestrator | Wednesday 18 February 2026 06:48:20 +0000 (0:00:01.330) 0:57:09.056 **** 2026-02-18 06:48:40.177158 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:48:40.177169 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:48:40.177179 | orchestrator | 2026-02-18 06:48:40.177190 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 06:48:40.177201 | orchestrator | Wednesday 18 February 2026 06:48:21 +0000 (0:00:01.679) 0:57:10.735 **** 2026-02-18 06:48:40.177212 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177223 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.177234 | orchestrator | 2026-02-18 06:48:40.177244 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 06:48:40.177255 | orchestrator | Wednesday 18 February 2026 06:48:23 +0000 (0:00:01.232) 0:57:11.968 **** 2026-02-18 06:48:40.177266 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177277 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.177287 | orchestrator | 2026-02-18 06:48:40.177298 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2026-02-18 06:48:40.177309 | orchestrator | Wednesday 18 February 2026 06:48:24 +0000 (0:00:01.324) 0:57:13.293 **** 2026-02-18 06:48:40.177320 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177331 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.177342 | orchestrator | 2026-02-18 06:48:40.177353 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 06:48:40.177364 | orchestrator | Wednesday 18 February 2026 06:48:25 +0000 (0:00:01.282) 0:57:14.575 **** 2026-02-18 06:48:40.177375 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:48:40.177385 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:48:40.177396 | orchestrator | 2026-02-18 06:48:40.177407 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 06:48:40.177418 | orchestrator | Wednesday 18 February 2026 06:48:26 +0000 (0:00:01.231) 0:57:15.807 **** 2026-02-18 06:48:40.177429 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:48:40.177440 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:48:40.177451 | orchestrator | 2026-02-18 06:48:40.177462 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-18 06:48:40.177472 | orchestrator | Wednesday 18 February 2026 06:48:28 +0000 (0:00:01.485) 0:57:17.292 **** 2026-02-18 06:48:40.177483 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177494 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.177505 | orchestrator | 2026-02-18 06:48:40.177516 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-18 06:48:40.177527 | orchestrator | Wednesday 18 February 2026 06:48:29 +0000 (0:00:01.405) 0:57:18.697 **** 2026-02-18 06:48:40.177537 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177548 | orchestrator | skipping: [testbed-node-4] 
2026-02-18 06:48:40.177559 | orchestrator | 2026-02-18 06:48:40.177570 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-18 06:48:40.177581 | orchestrator | Wednesday 18 February 2026 06:48:31 +0000 (0:00:01.259) 0:57:19.957 **** 2026-02-18 06:48:40.177592 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177603 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.177614 | orchestrator | 2026-02-18 06:48:40.177625 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-18 06:48:40.177636 | orchestrator | Wednesday 18 February 2026 06:48:32 +0000 (0:00:01.279) 0:57:21.237 **** 2026-02-18 06:48:40.177646 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177657 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.177675 | orchestrator | 2026-02-18 06:48:40.177686 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-18 06:48:40.177697 | orchestrator | Wednesday 18 February 2026 06:48:33 +0000 (0:00:01.214) 0:57:22.451 **** 2026-02-18 06:48:40.177708 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177719 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.177730 | orchestrator | 2026-02-18 06:48:40.177741 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-18 06:48:40.177766 | orchestrator | Wednesday 18 February 2026 06:48:34 +0000 (0:00:01.243) 0:57:23.694 **** 2026-02-18 06:48:40.177777 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177788 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.177799 | orchestrator | 2026-02-18 06:48:40.177810 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-18 06:48:40.177846 | orchestrator | Wednesday 18 February 2026 06:48:36 +0000 (0:00:01.242) 0:57:24.937 **** 
2026-02-18 06:48:40.177859 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177870 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.177881 | orchestrator | 2026-02-18 06:48:40.177892 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-18 06:48:40.177903 | orchestrator | Wednesday 18 February 2026 06:48:37 +0000 (0:00:01.548) 0:57:26.486 **** 2026-02-18 06:48:40.177913 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177924 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.177935 | orchestrator | 2026-02-18 06:48:40.177945 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-18 06:48:40.177956 | orchestrator | Wednesday 18 February 2026 06:48:38 +0000 (0:00:01.301) 0:57:27.787 **** 2026-02-18 06:48:40.177967 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:48:40.177978 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:48:40.177989 | orchestrator | 2026-02-18 06:48:40.178007 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-18 06:49:26.006261 | orchestrator | Wednesday 18 February 2026 06:48:40 +0000 (0:00:01.250) 0:57:29.038 **** 2026-02-18 06:49:26.006383 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.006400 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.006412 | orchestrator | 2026-02-18 06:49:26.006425 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-18 06:49:26.006437 | orchestrator | Wednesday 18 February 2026 06:48:41 +0000 (0:00:01.259) 0:57:30.297 **** 2026-02-18 06:49:26.006449 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.006460 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.006471 | orchestrator | 2026-02-18 06:49:26.006482 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-18 06:49:26.006493 | orchestrator | Wednesday 18 February 2026 06:48:42 +0000 (0:00:01.274) 0:57:31.572 **** 2026-02-18 06:49:26.006504 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.006515 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.006526 | orchestrator | 2026-02-18 06:49:26.006537 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-18 06:49:26.006548 | orchestrator | Wednesday 18 February 2026 06:48:44 +0000 (0:00:01.363) 0:57:32.935 **** 2026-02-18 06:49:26.006559 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:49:26.006571 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:49:26.006582 | orchestrator | 2026-02-18 06:49:26.006593 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-18 06:49:26.006605 | orchestrator | Wednesday 18 February 2026 06:48:46 +0000 (0:00:02.453) 0:57:35.389 **** 2026-02-18 06:49:26.006616 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:49:26.006627 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:49:26.006638 | orchestrator | 2026-02-18 06:49:26.006649 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-18 06:49:26.006660 | orchestrator | Wednesday 18 February 2026 06:48:48 +0000 (0:00:02.402) 0:57:37.791 **** 2026-02-18 06:49:26.006697 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4 2026-02-18 06:49:26.006709 | orchestrator | 2026-02-18 06:49:26.006720 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-18 06:49:26.006731 | orchestrator | Wednesday 18 February 2026 06:48:50 +0000 (0:00:01.274) 0:57:39.066 **** 2026-02-18 06:49:26.006742 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.006753 | orchestrator | skipping: [testbed-node-4] 
2026-02-18 06:49:26.006764 | orchestrator | 2026-02-18 06:49:26.006775 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-18 06:49:26.006786 | orchestrator | Wednesday 18 February 2026 06:48:51 +0000 (0:00:01.269) 0:57:40.336 **** 2026-02-18 06:49:26.006799 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.006812 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.006824 | orchestrator | 2026-02-18 06:49:26.006863 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-18 06:49:26.006876 | orchestrator | Wednesday 18 February 2026 06:48:52 +0000 (0:00:01.233) 0:57:41.570 **** 2026-02-18 06:49:26.006889 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-18 06:49:26.006900 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-18 06:49:26.006911 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-18 06:49:26.006922 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-18 06:49:26.006933 | orchestrator | 2026-02-18 06:49:26.006944 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-18 06:49:26.006954 | orchestrator | Wednesday 18 February 2026 06:48:54 +0000 (0:00:01.990) 0:57:43.561 **** 2026-02-18 06:49:26.006965 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:49:26.006977 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:49:26.006988 | orchestrator | 2026-02-18 06:49:26.006999 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-18 06:49:26.007009 | orchestrator | Wednesday 18 February 2026 06:48:56 +0000 (0:00:01.576) 0:57:45.137 **** 2026-02-18 06:49:26.007020 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.007031 | 
orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.007042 | orchestrator | 2026-02-18 06:49:26.007053 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-18 06:49:26.007064 | orchestrator | Wednesday 18 February 2026 06:48:57 +0000 (0:00:01.270) 0:57:46.408 **** 2026-02-18 06:49:26.007075 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.007085 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.007096 | orchestrator | 2026-02-18 06:49:26.007107 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-18 06:49:26.007118 | orchestrator | Wednesday 18 February 2026 06:48:58 +0000 (0:00:01.299) 0:57:47.707 **** 2026-02-18 06:49:26.007129 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.007140 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.007150 | orchestrator | 2026-02-18 06:49:26.007177 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-18 06:49:26.007189 | orchestrator | Wednesday 18 February 2026 06:49:00 +0000 (0:00:01.265) 0:57:48.973 **** 2026-02-18 06:49:26.007200 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4 2026-02-18 06:49:26.007211 | orchestrator | 2026-02-18 06:49:26.007221 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-18 06:49:26.007232 | orchestrator | Wednesday 18 February 2026 06:49:01 +0000 (0:00:01.259) 0:57:50.232 **** 2026-02-18 06:49:26.007243 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:49:26.007254 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:49:26.007265 | orchestrator | 2026-02-18 06:49:26.007275 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-18 06:49:26.007286 | orchestrator | Wednesday 18 February 2026 
06:49:03 +0000 (0:00:02.231) 0:57:52.463 **** 2026-02-18 06:49:26.007306 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 06:49:26.007335 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 06:49:26.007346 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 06:49:26.007357 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.007368 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 06:49:26.007379 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 06:49:26.007390 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 06:49:26.007401 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.007412 | orchestrator | 2026-02-18 06:49:26.007423 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-18 06:49:26.007434 | orchestrator | Wednesday 18 February 2026 06:49:05 +0000 (0:00:01.438) 0:57:53.902 **** 2026-02-18 06:49:26.007445 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.007455 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.007466 | orchestrator | 2026-02-18 06:49:26.007477 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-18 06:49:26.007488 | orchestrator | Wednesday 18 February 2026 06:49:06 +0000 (0:00:01.240) 0:57:55.143 **** 2026-02-18 06:49:26.007499 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.007510 | orchestrator | 2026-02-18 06:49:26.007521 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-18 06:49:26.007531 | orchestrator | Wednesday 18 February 2026 06:49:07 +0000 (0:00:01.168) 0:57:56.311 **** 2026-02-18 06:49:26.007542 | orchestrator | 
skipping: [testbed-node-3] 2026-02-18 06:49:26.007553 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.007564 | orchestrator | 2026-02-18 06:49:26.007574 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-18 06:49:26.007585 | orchestrator | Wednesday 18 February 2026 06:49:08 +0000 (0:00:01.280) 0:57:57.592 **** 2026-02-18 06:49:26.007596 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.007607 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.007618 | orchestrator | 2026-02-18 06:49:26.007628 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-18 06:49:26.007639 | orchestrator | Wednesday 18 February 2026 06:49:10 +0000 (0:00:01.315) 0:57:58.907 **** 2026-02-18 06:49:26.007650 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.007661 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.007672 | orchestrator | 2026-02-18 06:49:26.007683 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-18 06:49:26.007694 | orchestrator | Wednesday 18 February 2026 06:49:11 +0000 (0:00:01.242) 0:58:00.149 **** 2026-02-18 06:49:26.007704 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:49:26.007715 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:49:26.007726 | orchestrator | 2026-02-18 06:49:26.007737 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-18 06:49:26.007747 | orchestrator | Wednesday 18 February 2026 06:49:14 +0000 (0:00:03.017) 0:58:03.166 **** 2026-02-18 06:49:26.007758 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:49:26.007769 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:49:26.007780 | orchestrator | 2026-02-18 06:49:26.007791 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-18 06:49:26.007802 | orchestrator 
| Wednesday 18 February 2026 06:49:15 +0000 (0:00:01.380) 0:58:04.547 **** 2026-02-18 06:49:26.007813 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4 2026-02-18 06:49:26.007825 | orchestrator | 2026-02-18 06:49:26.007853 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-18 06:49:26.007864 | orchestrator | Wednesday 18 February 2026 06:49:16 +0000 (0:00:01.234) 0:58:05.781 **** 2026-02-18 06:49:26.007882 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.007893 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.007904 | orchestrator | 2026-02-18 06:49:26.007915 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-18 06:49:26.007926 | orchestrator | Wednesday 18 February 2026 06:49:18 +0000 (0:00:01.358) 0:58:07.140 **** 2026-02-18 06:49:26.007936 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.007947 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.007971 | orchestrator | 2026-02-18 06:49:26.007982 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-18 06:49:26.007993 | orchestrator | Wednesday 18 February 2026 06:49:19 +0000 (0:00:01.290) 0:58:08.430 **** 2026-02-18 06:49:26.008004 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.008015 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.008025 | orchestrator | 2026-02-18 06:49:26.008036 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-18 06:49:26.008047 | orchestrator | Wednesday 18 February 2026 06:49:20 +0000 (0:00:01.265) 0:58:09.696 **** 2026-02-18 06:49:26.008057 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.008068 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.008079 | orchestrator | 2026-02-18 
06:49:26.008096 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-18 06:49:26.008107 | orchestrator | Wednesday 18 February 2026 06:49:22 +0000 (0:00:01.262) 0:58:10.958 **** 2026-02-18 06:49:26.008118 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.008129 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.008140 | orchestrator | 2026-02-18 06:49:26.008150 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-18 06:49:26.008161 | orchestrator | Wednesday 18 February 2026 06:49:23 +0000 (0:00:01.242) 0:58:12.201 **** 2026-02-18 06:49:26.008172 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.008183 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.008193 | orchestrator | 2026-02-18 06:49:26.008204 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-18 06:49:26.008215 | orchestrator | Wednesday 18 February 2026 06:49:24 +0000 (0:00:01.244) 0:58:13.445 **** 2026-02-18 06:49:26.008226 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:49:26.008237 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:49:26.008247 | orchestrator | 2026-02-18 06:49:26.008266 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-18 06:50:07.821144 | orchestrator | Wednesday 18 February 2026 06:49:25 +0000 (0:00:01.416) 0:58:14.862 **** 2026-02-18 06:50:07.821262 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.821279 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:07.821290 | orchestrator | 2026-02-18 06:50:07.821303 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-18 06:50:07.821316 | orchestrator | Wednesday 18 February 2026 06:49:27 +0000 (0:00:01.269) 0:58:16.132 **** 2026-02-18 06:50:07.821327 | orchestrator | ok: 
[testbed-node-3] 2026-02-18 06:50:07.821339 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:50:07.821350 | orchestrator | 2026-02-18 06:50:07.821361 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-18 06:50:07.821372 | orchestrator | Wednesday 18 February 2026 06:49:28 +0000 (0:00:01.467) 0:58:17.600 **** 2026-02-18 06:50:07.821384 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4 2026-02-18 06:50:07.821395 | orchestrator | 2026-02-18 06:50:07.821406 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-18 06:50:07.821417 | orchestrator | Wednesday 18 February 2026 06:49:30 +0000 (0:00:01.374) 0:58:18.975 **** 2026-02-18 06:50:07.821428 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-18 06:50:07.821440 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-18 06:50:07.821450 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-18 06:50:07.821485 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-18 06:50:07.821497 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-18 06:50:07.821507 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-18 06:50:07.821518 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-18 06:50:07.821529 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-18 06:50:07.821540 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-18 06:50:07.821550 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-18 06:50:07.821561 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-18 06:50:07.821572 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-18 06:50:07.821582 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 
2026-02-18 06:50:07.821593 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-18 06:50:07.821604 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-18 06:50:07.821615 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-18 06:50:07.821642 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 06:50:07.821654 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 06:50:07.821665 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 06:50:07.821677 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 06:50:07.821701 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 06:50:07.821713 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 06:50:07.821726 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 06:50:07.821739 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 06:50:07.821752 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 06:50:07.821764 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 06:50:07.821776 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 06:50:07.821789 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 06:50:07.821801 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-18 06:50:07.821815 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-18 06:50:07.821827 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-18 06:50:07.821866 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-18 06:50:07.821887 | orchestrator | 2026-02-18 06:50:07.821906 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-18 06:50:07.821926 | orchestrator | Wednesday 18 February 2026 06:49:37 +0000 (0:00:06.935) 0:58:25.911 **** 2026-02-18 06:50:07.821945 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4 2026-02-18 06:50:07.821964 | orchestrator | 2026-02-18 06:50:07.821975 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-18 06:50:07.821985 | orchestrator | Wednesday 18 February 2026 06:49:38 +0000 (0:00:01.271) 0:58:27.182 **** 2026-02-18 06:50:07.822012 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 06:50:07.822091 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 06:50:07.822103 | orchestrator | 2026-02-18 06:50:07.822114 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-18 06:50:07.822125 | orchestrator | Wednesday 18 February 2026 06:49:39 +0000 (0:00:01.654) 0:58:28.836 **** 2026-02-18 06:50:07.822135 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 06:50:07.822191 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 06:50:07.822204 | orchestrator | 2026-02-18 06:50:07.822215 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-18 06:50:07.822244 | orchestrator | Wednesday 18 February 2026 06:49:42 +0000 (0:00:02.383) 0:58:31.220 **** 2026-02-18 06:50:07.822256 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.822267 | orchestrator | 
skipping: [testbed-node-4] 2026-02-18 06:50:07.822278 | orchestrator | 2026-02-18 06:50:07.822289 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-18 06:50:07.822299 | orchestrator | Wednesday 18 February 2026 06:49:43 +0000 (0:00:01.243) 0:58:32.463 **** 2026-02-18 06:50:07.822310 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.822321 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:07.822331 | orchestrator | 2026-02-18 06:50:07.822342 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-18 06:50:07.822352 | orchestrator | Wednesday 18 February 2026 06:49:44 +0000 (0:00:01.308) 0:58:33.772 **** 2026-02-18 06:50:07.822363 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.822374 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:07.822384 | orchestrator | 2026-02-18 06:50:07.822395 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-18 06:50:07.822406 | orchestrator | Wednesday 18 February 2026 06:49:46 +0000 (0:00:01.227) 0:58:34.999 **** 2026-02-18 06:50:07.822417 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.822427 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:07.822438 | orchestrator | 2026-02-18 06:50:07.822449 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-18 06:50:07.822460 | orchestrator | Wednesday 18 February 2026 06:49:47 +0000 (0:00:01.429) 0:58:36.429 **** 2026-02-18 06:50:07.822470 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.822481 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:07.822492 | orchestrator | 2026-02-18 06:50:07.822502 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-18 06:50:07.822514 | orchestrator | Wednesday 18 February 2026 
06:49:48 +0000 (0:00:01.344) 0:58:37.774 **** 2026-02-18 06:50:07.822525 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.822536 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:07.822546 | orchestrator | 2026-02-18 06:50:07.822557 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-18 06:50:07.822568 | orchestrator | Wednesday 18 February 2026 06:49:50 +0000 (0:00:01.428) 0:58:39.203 **** 2026-02-18 06:50:07.822579 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.822589 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:07.822600 | orchestrator | 2026-02-18 06:50:07.822611 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-18 06:50:07.822622 | orchestrator | Wednesday 18 February 2026 06:49:51 +0000 (0:00:01.286) 0:58:40.489 **** 2026-02-18 06:50:07.822632 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.822643 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:07.822654 | orchestrator | 2026-02-18 06:50:07.822664 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-18 06:50:07.822675 | orchestrator | Wednesday 18 February 2026 06:49:52 +0000 (0:00:01.319) 0:58:41.809 **** 2026-02-18 06:50:07.822686 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.822696 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:07.822707 | orchestrator | 2026-02-18 06:50:07.822718 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-18 06:50:07.822729 | orchestrator | Wednesday 18 February 2026 06:49:54 +0000 (0:00:01.246) 0:58:43.055 **** 2026-02-18 06:50:07.822739 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.822750 | orchestrator | skipping: [testbed-node-4] 2026-02-18 
06:50:07.822767 | orchestrator | 2026-02-18 06:50:07.822778 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-18 06:50:07.822789 | orchestrator | Wednesday 18 February 2026 06:49:55 +0000 (0:00:01.293) 0:58:44.349 **** 2026-02-18 06:50:07.822800 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:07.822811 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:07.822821 | orchestrator | 2026-02-18 06:50:07.822832 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-18 06:50:07.822869 | orchestrator | Wednesday 18 February 2026 06:49:56 +0000 (0:00:01.283) 0:58:45.633 **** 2026-02-18 06:50:07.822882 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-18 06:50:07.822900 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-18 06:50:07.822920 | orchestrator | 2026-02-18 06:50:07.822938 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-18 06:50:07.822954 | orchestrator | Wednesday 18 February 2026 06:50:01 +0000 (0:00:04.542) 0:58:50.176 **** 2026-02-18 06:50:07.822965 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 06:50:07.822983 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 06:50:07.822994 | orchestrator | 2026-02-18 06:50:07.823005 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-18 06:50:07.823016 | orchestrator | Wednesday 18 February 2026 06:50:02 +0000 (0:00:01.643) 0:58:51.819 **** 2026-02-18 06:50:07.823028 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-18 06:50:07.823051 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-18 06:50:57.116023 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-18 06:50:57.116140 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-18 06:50:57.116157 | orchestrator | 2026-02-18 06:50:57.116171 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-18 06:50:57.116184 | orchestrator | Wednesday 18 February 2026 06:50:07 +0000 (0:00:04.859) 0:58:56.679 **** 2026-02-18 06:50:57.116195 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:57.116207 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:57.116218 | orchestrator | 2026-02-18 06:50:57.116229 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-18 06:50:57.116240 | orchestrator | Wednesday 18 February 2026 06:50:09 
+0000 (0:00:01.307) 0:58:57.987 **** 2026-02-18 06:50:57.116252 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:57.116263 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:57.116274 | orchestrator | 2026-02-18 06:50:57.116287 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:50:57.116325 | orchestrator | Wednesday 18 February 2026 06:50:10 +0000 (0:00:01.274) 0:58:59.261 **** 2026-02-18 06:50:57.116337 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:57.116348 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:57.116359 | orchestrator | 2026-02-18 06:50:57.116370 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:50:57.116381 | orchestrator | Wednesday 18 February 2026 06:50:11 +0000 (0:00:01.215) 0:59:00.476 **** 2026-02-18 06:50:57.116392 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:57.116403 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:57.116413 | orchestrator | 2026-02-18 06:50:57.116424 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:50:57.116435 | orchestrator | Wednesday 18 February 2026 06:50:12 +0000 (0:00:01.238) 0:59:01.715 **** 2026-02-18 06:50:57.116446 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:57.116457 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:57.116468 | orchestrator | 2026-02-18 06:50:57.116479 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:50:57.116490 | orchestrator | Wednesday 18 February 2026 06:50:14 +0000 (0:00:01.231) 0:59:02.947 **** 2026-02-18 06:50:57.116502 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:50:57.116516 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:50:57.116528 | orchestrator | 2026-02-18 
06:50:57.116540 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:50:57.116552 | orchestrator | Wednesday 18 February 2026 06:50:15 +0000 (0:00:01.638) 0:59:04.585 **** 2026-02-18 06:50:57.116565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:50:57.116578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:50:57.116591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 06:50:57.116603 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:57.116615 | orchestrator | 2026-02-18 06:50:57.116627 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:50:57.116639 | orchestrator | Wednesday 18 February 2026 06:50:17 +0000 (0:00:01.460) 0:59:06.046 **** 2026-02-18 06:50:57.116652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:50:57.116665 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:50:57.116677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 06:50:57.116689 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:57.116701 | orchestrator | 2026-02-18 06:50:57.116713 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:50:57.116725 | orchestrator | Wednesday 18 February 2026 06:50:18 +0000 (0:00:01.512) 0:59:07.559 **** 2026-02-18 06:50:57.116737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:50:57.116749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:50:57.116775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 06:50:57.116788 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:57.116800 | orchestrator | 2026-02-18 06:50:57.116813 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-02-18 06:50:57.116825 | orchestrator | Wednesday 18 February 2026 06:50:20 +0000 (0:00:01.437) 0:59:08.996 **** 2026-02-18 06:50:57.116837 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:50:57.116880 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:50:57.116901 | orchestrator | 2026-02-18 06:50:57.116917 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:50:57.116936 | orchestrator | Wednesday 18 February 2026 06:50:21 +0000 (0:00:01.309) 0:59:10.306 **** 2026-02-18 06:50:57.116955 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-18 06:50:57.116973 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-18 06:50:57.116991 | orchestrator | 2026-02-18 06:50:57.117005 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-18 06:50:57.117025 | orchestrator | Wednesday 18 February 2026 06:50:22 +0000 (0:00:01.425) 0:59:11.732 **** 2026-02-18 06:50:57.117037 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:50:57.117056 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:50:57.117073 | orchestrator | 2026-02-18 06:50:57.117114 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-18 06:50:57.117134 | orchestrator | Wednesday 18 February 2026 06:50:24 +0000 (0:00:02.095) 0:59:13.827 **** 2026-02-18 06:50:57.117154 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:50:57.117166 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:50:57.117176 | orchestrator | 2026-02-18 06:50:57.117187 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-18 06:50:57.117198 | orchestrator | Wednesday 18 February 2026 06:50:26 +0000 (0:00:01.258) 0:59:15.086 **** 2026-02-18 06:50:57.117209 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, 
testbed-node-4 2026-02-18 06:50:57.117221 | orchestrator | 2026-02-18 06:50:57.117231 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-18 06:50:57.117242 | orchestrator | Wednesday 18 February 2026 06:50:27 +0000 (0:00:01.296) 0:59:16.383 **** 2026-02-18 06:50:57.117253 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-18 06:50:57.117264 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-18 06:50:57.117274 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-18 06:50:57.117285 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-18 06:50:57.117296 | orchestrator | 2026-02-18 06:50:57.117306 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-18 06:50:57.117317 | orchestrator | Wednesday 18 February 2026 06:50:29 +0000 (0:00:02.021) 0:59:18.405 **** 2026-02-18 06:50:57.117327 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 06:50:57.117338 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-18 06:50:57.117349 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 06:50:57.117360 | orchestrator | 2026-02-18 06:50:57.117370 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-18 06:50:57.117381 | orchestrator | Wednesday 18 February 2026 06:50:32 +0000 (0:00:03.210) 0:59:21.616 **** 2026-02-18 06:50:57.117392 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-18 06:50:57.117402 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-18 06:50:57.117413 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:50:57.117424 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-18 06:50:57.117435 | orchestrator | skipping: [testbed-node-4] => 
(item=None)
2026-02-18 06:50:57.117445 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:50:57.117456 | orchestrator |
2026-02-18 06:50:57.117466 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-18 06:50:57.117477 | orchestrator | Wednesday 18 February 2026 06:50:34 +0000 (0:00:02.056) 0:59:23.672 ****
2026-02-18 06:50:57.117488 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:50:57.117499 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:50:57.117509 | orchestrator |
2026-02-18 06:50:57.117520 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-02-18 06:50:57.117531 | orchestrator | Wednesday 18 February 2026 06:50:36 +0000 (0:00:02.038) 0:59:25.711 ****
2026-02-18 06:50:57.117541 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:50:57.117552 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:50:57.117563 | orchestrator |
2026-02-18 06:50:57.117573 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-02-18 06:50:57.117584 | orchestrator | Wednesday 18 February 2026 06:50:38 +0000 (0:00:01.257) 0:59:26.968 ****
2026-02-18 06:50:57.117594 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4
2026-02-18 06:50:57.117605 | orchestrator |
2026-02-18 06:50:57.117624 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-02-18 06:50:57.117635 | orchestrator | Wednesday 18 February 2026 06:50:39 +0000 (0:00:01.216) 0:59:28.185 ****
2026-02-18 06:50:57.117645 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4
2026-02-18 06:50:57.117656 | orchestrator |
2026-02-18 06:50:57.117666 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-02-18 06:50:57.117677 | orchestrator | Wednesday 18 February 2026 06:50:40 +0000 (0:00:01.239) 0:59:29.424 ****
2026-02-18 06:50:57.117688 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:50:57.117698 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:50:57.117709 | orchestrator |
2026-02-18 06:50:57.117720 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-02-18 06:50:57.117730 | orchestrator | Wednesday 18 February 2026 06:50:43 +0000 (0:00:02.555) 0:59:31.980 ****
2026-02-18 06:50:57.117741 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:50:57.117752 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:50:57.117762 | orchestrator |
2026-02-18 06:50:57.117773 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-02-18 06:50:57.117790 | orchestrator | Wednesday 18 February 2026 06:50:45 +0000 (0:00:02.128) 0:59:34.109 ****
2026-02-18 06:50:57.117801 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:50:57.117812 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:50:57.117822 | orchestrator |
2026-02-18 06:50:57.117833 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-02-18 06:50:57.117844 | orchestrator | Wednesday 18 February 2026 06:50:47 +0000 (0:00:02.434) 0:59:36.543 ****
2026-02-18 06:50:57.117878 | orchestrator | changed: [testbed-node-3]
2026-02-18 06:50:57.117890 | orchestrator | changed: [testbed-node-4]
2026-02-18 06:50:57.117900 | orchestrator |
2026-02-18 06:50:57.117911 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-02-18 06:50:57.117922 | orchestrator | Wednesday 18 February 2026 06:50:51 +0000 (0:00:03.497) 0:59:40.040 ****
2026-02-18 06:50:57.117933 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:50:57.117944 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:50:57.117954 | orchestrator |
2026-02-18 06:50:57.117965 | orchestrator | TASK [Set max_mds] *************************************************************
2026-02-18 06:50:57.117976 | orchestrator | Wednesday 18 February 2026 06:50:52 +0000 (0:00:01.832) 0:59:41.873 ****
2026-02-18 06:50:57.117987 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:50:57.118004 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-18 06:51:20.765530 | orchestrator |
2026-02-18 06:51:20.765644 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-02-18 06:51:20.765661 | orchestrator |
2026-02-18 06:51:20.765673 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-18 06:51:20.765685 | orchestrator | Wednesday 18 February 2026 06:50:57 +0000 (0:00:04.102) 0:59:45.976 ****
2026-02-18 06:51:20.765697 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-02-18 06:51:20.765708 | orchestrator |
2026-02-18 06:51:20.765718 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-18 06:51:20.765729 | orchestrator | Wednesday 18 February 2026 06:50:58 +0000 (0:00:01.132) 0:59:47.109 ****
2026-02-18 06:51:20.765740 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:20.765752 | orchestrator |
2026-02-18 06:51:20.765764 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-18 06:51:20.765775 | orchestrator | Wednesday 18 February 2026 06:50:59 +0000 (0:00:01.546) 0:59:48.656 ****
2026-02-18 06:51:20.765785 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:20.765796 | orchestrator |
2026-02-18 06:51:20.765807 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-18 06:51:20.765818 | orchestrator | Wednesday 18 February 2026 06:51:00 +0000 (0:00:01.209) 0:59:49.865 ****
2026-02-18 06:51:20.765829 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:20.765839 | orchestrator |
2026-02-18 06:51:20.765944 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-18 06:51:20.765960 | orchestrator | Wednesday 18 February 2026 06:51:02 +0000 (0:00:01.465) 0:59:51.331 ****
2026-02-18 06:51:20.765971 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:20.765982 | orchestrator |
2026-02-18 06:51:20.765993 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-18 06:51:20.766004 | orchestrator | Wednesday 18 February 2026 06:51:03 +0000 (0:00:01.179) 0:59:52.511 ****
2026-02-18 06:51:20.766014 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:20.766082 | orchestrator |
2026-02-18 06:51:20.766094 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-18 06:51:20.766107 | orchestrator | Wednesday 18 February 2026 06:51:04 +0000 (0:00:01.165) 0:59:53.677 ****
2026-02-18 06:51:20.766119 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:20.766131 | orchestrator |
2026-02-18 06:51:20.766144 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-18 06:51:20.766158 | orchestrator | Wednesday 18 February 2026 06:51:05 +0000 (0:00:01.171) 0:59:54.848 ****
2026-02-18 06:51:20.766171 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:20.766184 | orchestrator |
2026-02-18 06:51:20.766195 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-18 06:51:20.766208 | orchestrator | Wednesday 18 February 2026 06:51:07 +0000 (0:00:01.153) 0:59:56.002 ****
2026-02-18 06:51:20.766220 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:20.766232 | orchestrator |
2026-02-18 06:51:20.766245 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-18 06:51:20.766257 | orchestrator | Wednesday 18 February 2026 06:51:08 +0000 (0:00:01.122) 0:59:57.124 ****
2026-02-18 06:51:20.766270 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:51:20.766283 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:51:20.766295 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:51:20.766307 | orchestrator |
2026-02-18 06:51:20.766320 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-18 06:51:20.766333 | orchestrator | Wednesday 18 February 2026 06:51:10 +0000 (0:00:01.267) 0:59:59.217 ****
2026-02-18 06:51:20.766345 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:20.766357 | orchestrator |
2026-02-18 06:51:20.766370 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-18 06:51:20.766382 | orchestrator | Wednesday 18 February 2026 06:51:11 +0000 (0:00:01.267) 1:00:00.484 ****
2026-02-18 06:51:20.766396 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:51:20.766414 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:51:20.766431 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:51:20.766444 | orchestrator |
2026-02-18 06:51:20.766456 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-18 06:51:20.766466 | orchestrator | Wednesday 18 February 2026 06:51:14 +0000 (0:00:03.255) 1:00:03.740 ****
2026-02-18 06:51:20.766477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-18 06:51:20.766503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-18 06:51:20.766517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-18 06:51:20.766535 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:20.766546 | orchestrator |
2026-02-18 06:51:20.766557 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-18 06:51:20.766568 | orchestrator | Wednesday 18 February 2026 06:51:16 +0000 (0:00:01.482) 1:00:05.222 ****
2026-02-18 06:51:20.766581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:51:20.766605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:51:20.766635 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:51:20.766647 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:20.766658 | orchestrator |
2026-02-18 06:51:20.766669 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-18 06:51:20.766680 | orchestrator | Wednesday 18 February 2026 06:51:18 +0000 (0:00:01.761) 1:00:06.984 ****
2026-02-18 06:51:20.766693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:20.766708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:20.766719 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:20.766730 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:20.766741 | orchestrator |
2026-02-18 06:51:20.766752 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-18 06:51:20.766770 | orchestrator | Wednesday 18 February 2026 06:51:19 +0000 (0:00:01.308) 1:00:08.293 ****
2026-02-18 06:51:20.766791 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:51:12.528067', 'end': '2026-02-18 06:51:12.578015', 'delta': '0:00:00.049948', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:51:20.766833 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:51:13.089274', 'end': '2026-02-18 06:51:13.133489', 'delta': '0:00:00.044215', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:51:20.766892 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:51:13.645930', 'end': '2026-02-18 06:51:13.689255', 'delta': '0:00:00.043325', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:51:20.766912 | orchestrator |
2026-02-18 06:51:20.766941 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-18 06:51:38.713418 | orchestrator | Wednesday 18 February 2026 06:51:20 +0000 (0:00:01.334) 1:00:09.628 ****
2026-02-18 06:51:38.713494 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:38.713501 | orchestrator |
2026-02-18 06:51:38.713506 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-18 06:51:38.713510 | orchestrator | Wednesday 18 February 2026 06:51:22 +0000 (0:00:01.280) 1:00:10.908 ****
2026-02-18 06:51:38.713515 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:38.713519 | orchestrator |
2026-02-18 06:51:38.713523 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-18 06:51:38.713527 | orchestrator | Wednesday 18 February 2026 06:51:23 +0000 (0:00:01.315) 1:00:12.224 ****
2026-02-18 06:51:38.713531 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:38.713535 | orchestrator |
2026-02-18 06:51:38.713539 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-18 06:51:38.713543 | orchestrator | Wednesday 18 February 2026 06:51:24 +0000 (0:00:01.200) 1:00:13.424 ****
2026-02-18 06:51:38.713547 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-18 06:51:38.713551 | orchestrator |
2026-02-18 06:51:38.713555 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:51:38.713558 | orchestrator | Wednesday 18 February 2026 06:51:26 +0000 (0:00:02.112) 1:00:15.537 ****
2026-02-18 06:51:38.713562 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:38.713566 | orchestrator |
2026-02-18 06:51:38.713570 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-18 06:51:38.713574 | orchestrator | Wednesday 18 February 2026 06:51:27 +0000 (0:00:01.151) 1:00:16.689 ****
2026-02-18 06:51:38.713577 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:38.713581 | orchestrator |
2026-02-18 06:51:38.713585 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-18 06:51:38.713589 | orchestrator | Wednesday 18 February 2026 06:51:28 +0000 (0:00:01.143) 1:00:17.832 ****
2026-02-18 06:51:38.713592 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:38.713596 | orchestrator |
2026-02-18 06:51:38.713600 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:51:38.713604 | orchestrator | Wednesday 18 February 2026 06:51:30 +0000 (0:00:01.239) 1:00:19.071 ****
2026-02-18 06:51:38.713608 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:38.713611 | orchestrator |
2026-02-18 06:51:38.713615 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-18 06:51:38.713619 | orchestrator | Wednesday 18 February 2026 06:51:31 +0000 (0:00:01.174) 1:00:20.246 ****
2026-02-18 06:51:38.713623 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:38.713627 | orchestrator |
2026-02-18 06:51:38.713631 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-18 06:51:38.713635 | orchestrator | Wednesday 18 February 2026 06:51:32 +0000 (0:00:01.198) 1:00:21.444 ****
2026-02-18 06:51:38.713638 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:38.713642 | orchestrator |
2026-02-18 06:51:38.713646 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-18 06:51:38.713664 | orchestrator | Wednesday 18 February 2026 06:51:33 +0000 (0:00:01.188) 1:00:22.633 ****
2026-02-18 06:51:38.713668 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:38.713672 | orchestrator |
2026-02-18 06:51:38.713676 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-18 06:51:38.713680 | orchestrator | Wednesday 18 February 2026 06:51:34 +0000 (0:00:01.160) 1:00:23.794 ****
2026-02-18 06:51:38.713683 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:38.713687 | orchestrator |
2026-02-18 06:51:38.713691 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-18 06:51:38.713695 | orchestrator | Wednesday 18 February 2026 06:51:36 +0000 (0:00:01.214) 1:00:25.009 ****
2026-02-18 06:51:38.713698 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:38.713702 | orchestrator |
2026-02-18 06:51:38.713706 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-18 06:51:38.713710 | orchestrator | Wednesday 18 February 2026 06:51:37 +0000 (0:00:01.162) 1:00:26.172 ****
2026-02-18 06:51:38.713714 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:51:38.713718 | orchestrator |
2026-02-18 06:51:38.713721 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-18 06:51:38.713725 | orchestrator | Wednesday 18 February 2026 06:51:38 +0000 (0:00:01.172) 1:00:27.344 ****
2026-02-18 06:51:38.713740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:51:38.713747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'uuids': ['b16ba19b-4a40-4954-b96f-45d5ea534fea'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN']}})
2026-02-18 06:51:38.713763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3f0eb34d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-18 06:51:38.713770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31']}})
2026-02-18 06:51:38.713779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:51:38.713783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:51:38.713787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-52-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-18 06:51:38.713794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:51:38.713799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ', 'dm-uuid-CRYPT-LUKS2-a588a620006c41148df487d2b156bd76-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-18 06:51:38.713806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:51:40.074854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'uuids': ['a588a620-006c-4114-8df4-87d2b156bd76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ']}})
2026-02-18 06:51:40.075012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2']}})
2026-02-18 06:51:40.075059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:51:40.075095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b754618', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-18 06:51:40.075131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:51:40.075144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:51:40.075166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN', 'dm-uuid-CRYPT-LUKS2-b16ba19b4a404954b96f45d5ea534fea-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-18 06:51:40.075179 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:51:40.075192 | orchestrator |
2026-02-18 06:51:40.075204 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-18 06:51:40.075217 | orchestrator | Wednesday 18 February 2026 06:51:39 +0000 (0:00:01.371) 1:00:28.716 ****
2026-02-18 06:51:40.075229 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:40.075247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2', 'dm-uuid-LVM-b332kKrpd0HVPMS3JaCcbzmkJengbTB3k1xcY4YD1jmAC1hLpfD9Usq2rzLokhXN'], 'uuids': ['b16ba19b-4a40-4954-b96f-45d5ea534fea'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN']}}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:40.075260 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911', 'scsi-SQEMU_QEMU_HARDDISK_3f0eb34d-4d19-41b1-a545-9be91ac9c911'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3f0eb34d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:40.075282 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dXXyxK-mZqa-gH2o-thv9-N84c-yWEG-rq5zVz', 'scsi-0QEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f', 'scsi-SQEMU_QEMU_HARDDISK_462a6373-b3e1-4411-8b4b-92b19c9bbd9f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31']}}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:41.264439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:41.264620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:41.264640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-52-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:41.264673 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:41.264685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ', 'dm-uuid-CRYPT-LUKS2-a588a620006c41148df487d2b156bd76-BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-18 06:51:41.264697 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [],
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:51:41.264749 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31-osd--block--62ce64d1--56ba--5b5c--b13c--8c9d2c247f31', 'dm-uuid-LVM-mc0mcfdX19LcXuP5tIRslJVcyAxcEI1YBONtiusXwPbtgMZykujv5vsiw3hx8dOJ'], 'uuids': ['a588a620-006c-4114-8df4-87d2b156bd76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '462a6373', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['BONtiu-sXwP-btgM-Zyku-jv5v-siw3-hx8dOJ']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:51:41.264763 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Vwxx4l-obxP-mVTy-VoZ3-jA2r-fK3H-p1JcrL', 'scsi-0QEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6', 'scsi-SQEMU_QEMU_HARDDISK_0606cde6-5f5c-43d5-b7e5-8f6931209fa6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0606cde6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c707e11d--d3db--5907--b25a--51e31fa350e2-osd--block--c707e11d--d3db--5907--b25a--51e31fa350e2']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:51:41.264783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:51:41.264805 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b754618', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b754618-e661-413d-92c2-ebb9259de61f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:52:10.460674 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:52:10.460794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:52:10.460828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN', 'dm-uuid-CRYPT-LUKS2-b16ba19b4a404954b96f45d5ea534fea-k1xcY4-YD1j-mAC1-hLpf-D9Us-q2rz-LokhXN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-18 06:52:10.460842 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:52:10.460856 | orchestrator |
2026-02-18 06:52:10.460868 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-18 06:52:10.460936 | orchestrator | Wednesday 18 February 2026 06:51:41 +0000 (0:00:01.416) 1:00:30.133 ****
2026-02-18 06:52:10.460949 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:52:10.460960 | orchestrator |
2026-02-18 06:52:10.460972 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-18 06:52:10.460983 | orchestrator | Wednesday 18 February 2026 06:51:42 +0000 (0:00:01.582) 1:00:31.715 ****
2026-02-18 06:52:10.460993 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:52:10.461004 | orchestrator |
2026-02-18 06:52:10.461015 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-18 06:52:10.461026 | orchestrator | Wednesday 18 February 2026 06:51:44 +0000 (0:00:01.199) 1:00:32.915 ****
2026-02-18 06:52:10.461061 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:52:10.461073 | orchestrator |
2026-02-18 06:52:10.461084 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-18 06:52:10.461094 | orchestrator | Wednesday 18 February 2026 06:51:45 +0000 (0:00:01.536) 1:00:34.451 ****
2026-02-18 06:52:10.461105 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:52:10.461116 | orchestrator |
2026-02-18 06:52:10.461127 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-18 06:52:10.461137 | orchestrator | Wednesday 18 February 2026 06:51:46 +0000 (0:00:01.172) 1:00:35.624 ****
2026-02-18 06:52:10.461148 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:52:10.461160 | orchestrator |
2026-02-18 06:52:10.461171 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-18 06:52:10.461182 | orchestrator | Wednesday 18 February 2026 06:51:48 +0000 (0:00:01.264) 1:00:36.888 ****
2026-02-18 06:52:10.461193 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:52:10.461203 | orchestrator |
2026-02-18 06:52:10.461214 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-18 06:52:10.461225 | orchestrator | Wednesday 18 February 2026 06:51:49 +0000 (0:00:01.302) 1:00:38.191 ****
2026-02-18 06:52:10.461236 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-18 06:52:10.461247 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-18 06:52:10.461258 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-18 06:52:10.461268 | orchestrator |
2026-02-18 06:52:10.461279 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-18 06:52:10.461290 | orchestrator | Wednesday 18 February 2026 06:51:51 +0000 (0:00:01.746) 1:00:39.938 ****
2026-02-18 06:52:10.461301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-18 06:52:10.461311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-18 06:52:10.461322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-18 06:52:10.461333 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:52:10.461343 | orchestrator |
2026-02-18 06:52:10.461354 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-18 06:52:10.461365 | orchestrator | Wednesday 18 February 2026 06:51:52 +0000 (0:00:01.163) 1:00:41.102 ****
2026-02-18 06:52:10.461394 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-02-18 06:52:10.461406 | orchestrator |
2026-02-18 06:52:10.461418 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-18 06:52:10.461430 | orchestrator | Wednesday 18 February 2026 06:51:53 +0000 (0:00:01.118) 1:00:42.221 ****
2026-02-18 06:52:10.461442 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:52:10.461453 | orchestrator |
2026-02-18 06:52:10.461464 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-18 06:52:10.461475 | orchestrator | Wednesday 18 February 2026 06:51:54 +0000 (0:00:01.153) 1:00:43.374 ****
2026-02-18 06:52:10.461486 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:52:10.461497 | orchestrator |
2026-02-18 06:52:10.461508 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-18 06:52:10.461519 | orchestrator | Wednesday 18 February 2026 06:51:55 +0000 (0:00:01.230) 1:00:44.604 ****
2026-02-18 06:52:10.461530 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:52:10.461541 | orchestrator |
2026-02-18 06:52:10.461552 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-18 06:52:10.461563 | orchestrator | Wednesday 18 February 2026 06:51:56 +0000 (0:00:01.151) 1:00:45.756 ****
2026-02-18 06:52:10.461574 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:52:10.461585 | orchestrator |
2026-02-18 06:52:10.461596 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-18 06:52:10.461607 | orchestrator | Wednesday 18 February 2026 06:51:58 +0000 (0:00:01.246) 1:00:47.003 ****
2026-02-18 06:52:10.461626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 06:52:10.461637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 06:52:10.461648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 06:52:10.461659 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:52:10.461670 | orchestrator |
2026-02-18 06:52:10.461681 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-18 06:52:10.461692 | orchestrator | Wednesday 18 February 2026 06:51:59 +0000 (0:00:01.470) 1:00:48.473 ****
2026-02-18 06:52:10.461709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 06:52:10.461720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 06:52:10.461731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 06:52:10.461742 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:52:10.461753 | orchestrator |
2026-02-18 06:52:10.461764 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-18 06:52:10.461775 | orchestrator | Wednesday 18 February 2026 06:52:01 +0000 (0:00:01.416) 1:00:49.890 ****
2026-02-18 06:52:10.461786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 06:52:10.461797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-18 06:52:10.461808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-18 06:52:10.461819 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:52:10.461829 | orchestrator |
2026-02-18 06:52:10.461840 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-18 06:52:10.461851 | orchestrator | Wednesday 18 February 2026 06:52:02 +0000 (0:00:01.822) 1:00:51.713 ****
2026-02-18 06:52:10.461862 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:52:10.461900 | orchestrator |
2026-02-18 06:52:10.461913 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-18 06:52:10.461924 | orchestrator | Wednesday 18 February 2026 06:52:04 +0000 (0:00:01.178) 1:00:52.892 ****
2026-02-18 06:52:10.461936 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-18 06:52:10.461955 | orchestrator |
2026-02-18 06:52:10.461972 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-18 06:52:10.461991 | orchestrator | Wednesday 18 February 2026 06:52:05 +0000 (0:00:01.859) 1:00:54.752 ****
2026-02-18 06:52:10.462009 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:52:10.462091 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:52:10.462103 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:52:10.462114 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 06:52:10.462125 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-18 06:52:10.462135 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-18 06:52:10.462146 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-18 06:52:10.462157 | orchestrator |
2026-02-18 06:52:10.462168 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-18 06:52:10.462178 | orchestrator | Wednesday 18 February 2026 06:52:07 +0000 (0:00:01.851) 1:00:56.603 ****
2026-02-18 06:52:10.462189 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:52:10.462200 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:52:10.462211 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:52:10.462222 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-18 06:52:10.462232 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-18 06:52:10.462243 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-18 06:52:10.462264 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-18 06:52:10.462275 | orchestrator |
2026-02-18 06:52:10.462296 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-18 06:53:03.903711 | orchestrator | Wednesday 18 February 2026 06:52:10 +0000 (0:00:02.714) 1:00:59.317 ****
2026-02-18 06:53:03.903828 | orchestrator | changed: [testbed-node-3]
2026-02-18 06:53:03.903845 | orchestrator |
2026-02-18 06:53:03.903859 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-18 06:53:03.903872 | orchestrator | Wednesday 18 February 2026 06:52:12 +0000 (0:00:02.284) 1:01:01.602 ****
2026-02-18 06:53:03.903928 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-18 06:53:03.903942 | orchestrator |
2026-02-18 06:53:03.903953 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-18 06:53:03.903965 | orchestrator | Wednesday 18 February 2026 06:52:15 +0000 (0:00:02.863) 1:01:04.466 ****
2026-02-18 06:53:03.903976 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-18 06:53:03.903987 | orchestrator |
2026-02-18 06:53:03.903998 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-18 06:53:03.904009 | orchestrator | Wednesday 18 February 2026 06:52:17 +0000 (0:00:02.358) 1:01:06.824 ****
2026-02-18 06:53:03.904020 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-18 06:53:03.904031 | orchestrator |
2026-02-18 06:53:03.904042 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-18 06:53:03.904053 | orchestrator | Wednesday 18 February 2026 06:52:19 +0000 (0:00:01.188) 1:01:08.013 ****
2026-02-18 06:53:03.904064 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-18 06:53:03.904075 | orchestrator |
2026-02-18 06:53:03.904086 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-18 06:53:03.904097 | orchestrator | Wednesday 18 February 2026 06:52:20 +0000 (0:00:01.130) 1:01:09.143 ****
2026-02-18 06:53:03.904107 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.904119 | orchestrator |
2026-02-18 06:53:03.904146 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-18 06:53:03.904158 | orchestrator | Wednesday 18 February 2026 06:52:21 +0000 (0:00:01.168) 1:01:10.312 ****
2026-02-18 06:53:03.904169 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:53:03.904180 | orchestrator |
2026-02-18 06:53:03.904191 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-18 06:53:03.904203 | orchestrator | Wednesday 18 February 2026 06:52:22 +0000 (0:00:01.532) 1:01:11.845 ****
2026-02-18 06:53:03.904214 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:53:03.904225 | orchestrator |
2026-02-18 06:53:03.904236 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-18 06:53:03.904249 | orchestrator | Wednesday 18 February 2026 06:52:24 +0000 (0:00:01.587) 1:01:13.432 ****
2026-02-18 06:53:03.904263 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:53:03.904275 | orchestrator |
2026-02-18 06:53:03.904288 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-18 06:53:03.904301 | orchestrator | Wednesday 18 February 2026 06:52:26 +0000 (0:00:01.593) 1:01:15.026 ****
2026-02-18 06:53:03.904314 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.904326 | orchestrator |
2026-02-18 06:53:03.904339 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-18 06:53:03.904352 | orchestrator | Wednesday 18 February 2026 06:52:27 +0000 (0:00:01.144) 1:01:16.170 ****
2026-02-18 06:53:03.904364 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.904376 | orchestrator |
2026-02-18 06:53:03.904389 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-18 06:53:03.904425 | orchestrator | Wednesday 18 February 2026 06:52:28 +0000 (0:00:01.191) 1:01:17.362 ****
2026-02-18 06:53:03.904438 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.904451 | orchestrator |
2026-02-18 06:53:03.904463 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-18 06:53:03.904475 | orchestrator | Wednesday 18 February 2026 06:52:29 +0000 (0:00:01.161) 1:01:18.523 ****
2026-02-18 06:53:03.904488 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:53:03.904501 | orchestrator |
2026-02-18 06:53:03.904514 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-18 06:53:03.904526 | orchestrator | Wednesday 18 February 2026 06:52:31 +0000 (0:00:01.538) 1:01:20.062 ****
2026-02-18 06:53:03.904539 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:53:03.904551 | orchestrator |
2026-02-18 06:53:03.904564 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-18 06:53:03.904576 | orchestrator | Wednesday 18 February 2026 06:52:32 +0000 (0:00:01.502) 1:01:21.564 ****
2026-02-18 06:53:03.904589 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.904601 | orchestrator |
2026-02-18 06:53:03.904612 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-18 06:53:03.904623 | orchestrator | Wednesday 18 February 2026 06:52:33 +0000 (0:00:01.156) 1:01:22.721 ****
2026-02-18 06:53:03.904633 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.904644 | orchestrator |
2026-02-18 06:53:03.904655 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-18 06:53:03.904666 | orchestrator | Wednesday 18 February 2026 06:52:35 +0000 (0:00:01.159) 1:01:23.881 ****
2026-02-18 06:53:03.904677 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:53:03.904687 | orchestrator |
2026-02-18 06:53:03.904698 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-18 06:53:03.904709 | orchestrator | Wednesday 18 February 2026 06:52:36 +0000 (0:00:01.205) 1:01:25.086 ****
2026-02-18 06:53:03.904720 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:53:03.904730 | orchestrator |
2026-02-18 06:53:03.904741 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-18 06:53:03.904752 | orchestrator | Wednesday 18 February 2026 06:52:37 +0000 (0:00:01.165) 1:01:26.251 ****
2026-02-18 06:53:03.904763 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:53:03.904773 | orchestrator |
2026-02-18 06:53:03.904802 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-18 06:53:03.904814 | orchestrator | Wednesday 18 February 2026 06:52:38 +0000 (0:00:01.191) 1:01:27.443 ****
2026-02-18 06:53:03.904824 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.904835 | orchestrator |
2026-02-18 06:53:03.904846 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-18 06:53:03.904857 | orchestrator | Wednesday 18 February 2026 06:52:39 +0000 (0:00:01.203) 1:01:28.646 ****
2026-02-18 06:53:03.904868 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.904879 | orchestrator |
2026-02-18 06:53:03.904908 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-18 06:53:03.904919 | orchestrator | Wednesday 18 February 2026 06:52:40 +0000 (0:00:01.191) 1:01:29.838 ****
2026-02-18 06:53:03.904930 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.904940 | orchestrator |
2026-02-18 06:53:03.904951 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-18 06:53:03.904962 | orchestrator | Wednesday 18 February 2026 06:52:42 +0000 (0:00:01.202) 1:01:31.040 ****
2026-02-18 06:53:03.904973 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:53:03.904983 | orchestrator |
2026-02-18 06:53:03.904994 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-18 06:53:03.905005 | orchestrator | Wednesday 18 February 2026 06:52:43 +0000 (0:00:01.200) 1:01:32.241 ****
2026-02-18 06:53:03.905015 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:53:03.905026 | orchestrator |
2026-02-18 06:53:03.905037 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-18 06:53:03.905058 | orchestrator | Wednesday 18 February 2026 06:52:44 +0000 (0:00:01.167) 1:01:33.409 ****
2026-02-18 06:53:03.905069 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.905080 | orchestrator |
2026-02-18 06:53:03.905091 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-18 06:53:03.905101 | orchestrator | Wednesday 18 February 2026 06:52:45 +0000 (0:00:01.145) 1:01:34.554 ****
2026-02-18 06:53:03.905112 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.905123 | orchestrator |
2026-02-18 06:53:03.905134 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-18 06:53:03.905144 | orchestrator | Wednesday 18 February 2026 06:52:46 +0000 (0:00:01.131) 1:01:35.685 ****
2026-02-18 06:53:03.905155 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.905166 | orchestrator |
2026-02-18 06:53:03.905182 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-18 06:53:03.905193 | orchestrator | Wednesday 18 February 2026 06:52:47 +0000 (0:00:01.186) 1:01:36.872 ****
2026-02-18 06:53:03.905204 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.905214 | orchestrator |
2026-02-18 06:53:03.905225 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-18 06:53:03.905238 | orchestrator | Wednesday 18 February 2026 06:52:49 +0000 (0:00:01.213) 1:01:38.085 ****
2026-02-18 06:53:03.905256 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.905275 | orchestrator |
2026-02-18 06:53:03.905294 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-18 06:53:03.905312 | orchestrator | Wednesday 18 February 2026 06:52:50 +0000 (0:00:01.131) 1:01:39.216 ****
2026-02-18 06:53:03.905332 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.905349 | orchestrator |
2026-02-18 06:53:03.905366 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-18 06:53:03.905377 | orchestrator | Wednesday 18 February 2026 06:52:51 +0000 (0:00:01.149) 1:01:40.366 ****
2026-02-18 06:53:03.905388 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.905399 | orchestrator |
2026-02-18 06:53:03.905410 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-18 06:53:03.905422 | orchestrator | Wednesday 18 February 2026 06:52:52 +0000 (0:00:01.125) 1:01:41.492 ****
2026-02-18 06:53:03.905433 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.905444 | orchestrator |
2026-02-18 06:53:03.905455 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-18 06:53:03.905466 | orchestrator | Wednesday 18 February 2026 06:52:53 +0000 (0:00:01.135) 1:01:42.628 ****
2026-02-18 06:53:03.905476 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.905487 | orchestrator |
2026-02-18 06:53:03.905498 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-18 06:53:03.905509 | orchestrator | Wednesday 18 February 2026 06:52:54 +0000 (0:00:01.160) 1:01:43.788 ****
2026-02-18 06:53:03.905520 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.905531 | orchestrator |
2026-02-18 06:53:03.905542 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-18 06:53:03.905553 | orchestrator | Wednesday 18 February 2026 06:52:56 +0000 (0:00:01.274) 1:01:45.063 ****
2026-02-18 06:53:03.905564 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:53:03.905574 | orchestrator |
2026-02-18 06:53:03.905585 | orchestrator | TASK [ceph-common : Include selinux.yml]
*************************************** 2026-02-18 06:53:03.905597 | orchestrator | Wednesday 18 February 2026 06:52:57 +0000 (0:00:01.180) 1:01:46.243 **** 2026-02-18 06:53:03.905615 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:03.905631 | orchestrator | 2026-02-18 06:53:03.905647 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-18 06:53:03.905665 | orchestrator | Wednesday 18 February 2026 06:52:58 +0000 (0:00:01.173) 1:01:47.417 **** 2026-02-18 06:53:03.905683 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:53:03.905702 | orchestrator | 2026-02-18 06:53:03.905748 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-18 06:53:03.905769 | orchestrator | Wednesday 18 February 2026 06:53:00 +0000 (0:00:01.945) 1:01:49.362 **** 2026-02-18 06:53:03.905780 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:53:03.905791 | orchestrator | 2026-02-18 06:53:03.905802 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-18 06:53:03.905824 | orchestrator | Wednesday 18 February 2026 06:53:02 +0000 (0:00:02.286) 1:01:51.649 **** 2026-02-18 06:53:03.905835 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-02-18 06:53:03.905846 | orchestrator | 2026-02-18 06:53:03.905857 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-18 06:53:03.905877 | orchestrator | Wednesday 18 February 2026 06:53:03 +0000 (0:00:01.118) 1:01:52.767 **** 2026-02-18 06:53:50.965115 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.965224 | orchestrator | 2026-02-18 06:53:50.965241 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-18 06:53:50.965253 | orchestrator | Wednesday 18 February 2026 06:53:05 +0000 (0:00:01.172) 1:01:53.940 **** 
2026-02-18 06:53:50.965263 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.965273 | orchestrator | 2026-02-18 06:53:50.965283 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-18 06:53:50.965294 | orchestrator | Wednesday 18 February 2026 06:53:06 +0000 (0:00:01.160) 1:01:55.101 **** 2026-02-18 06:53:50.965304 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-18 06:53:50.965314 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-18 06:53:50.965325 | orchestrator | 2026-02-18 06:53:50.965335 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-18 06:53:50.965345 | orchestrator | Wednesday 18 February 2026 06:53:08 +0000 (0:00:01.856) 1:01:56.958 **** 2026-02-18 06:53:50.965355 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:53:50.965366 | orchestrator | 2026-02-18 06:53:50.965376 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-18 06:53:50.965385 | orchestrator | Wednesday 18 February 2026 06:53:09 +0000 (0:00:01.463) 1:01:58.422 **** 2026-02-18 06:53:50.965395 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.965405 | orchestrator | 2026-02-18 06:53:50.965415 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-18 06:53:50.965424 | orchestrator | Wednesday 18 February 2026 06:53:10 +0000 (0:00:01.217) 1:01:59.640 **** 2026-02-18 06:53:50.965434 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.965444 | orchestrator | 2026-02-18 06:53:50.965454 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-18 06:53:50.965464 | orchestrator | Wednesday 18 February 2026 06:53:11 +0000 (0:00:01.209) 1:02:00.849 **** 2026-02-18 06:53:50.965473 | orchestrator | 
skipping: [testbed-node-3] 2026-02-18 06:53:50.965483 | orchestrator | 2026-02-18 06:53:50.965493 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-18 06:53:50.965519 | orchestrator | Wednesday 18 February 2026 06:53:13 +0000 (0:00:01.190) 1:02:02.040 **** 2026-02-18 06:53:50.965529 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-02-18 06:53:50.965540 | orchestrator | 2026-02-18 06:53:50.965550 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-18 06:53:50.965559 | orchestrator | Wednesday 18 February 2026 06:53:14 +0000 (0:00:01.106) 1:02:03.147 **** 2026-02-18 06:53:50.965569 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:53:50.965579 | orchestrator | 2026-02-18 06:53:50.965589 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-18 06:53:50.965599 | orchestrator | Wednesday 18 February 2026 06:53:15 +0000 (0:00:01.675) 1:02:04.822 **** 2026-02-18 06:53:50.965608 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 06:53:50.965618 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 06:53:50.965649 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 06:53:50.965661 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.965673 | orchestrator | 2026-02-18 06:53:50.965684 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-18 06:53:50.965695 | orchestrator | Wednesday 18 February 2026 06:53:17 +0000 (0:00:01.159) 1:02:05.981 **** 2026-02-18 06:53:50.965706 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.965718 | orchestrator | 2026-02-18 06:53:50.965729 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-18 06:53:50.965740 | orchestrator | Wednesday 18 February 2026 06:53:18 +0000 (0:00:01.176) 1:02:07.158 **** 2026-02-18 06:53:50.965751 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.965762 | orchestrator | 2026-02-18 06:53:50.965773 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-18 06:53:50.965784 | orchestrator | Wednesday 18 February 2026 06:53:19 +0000 (0:00:01.246) 1:02:08.404 **** 2026-02-18 06:53:50.965795 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.965806 | orchestrator | 2026-02-18 06:53:50.965817 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-18 06:53:50.965828 | orchestrator | Wednesday 18 February 2026 06:53:20 +0000 (0:00:01.154) 1:02:09.558 **** 2026-02-18 06:53:50.965839 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.965850 | orchestrator | 2026-02-18 06:53:50.965861 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-18 06:53:50.965872 | orchestrator | Wednesday 18 February 2026 06:53:21 +0000 (0:00:01.195) 1:02:10.754 **** 2026-02-18 06:53:50.965883 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.965894 | orchestrator | 2026-02-18 06:53:50.965937 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-18 06:53:50.965948 | orchestrator | Wednesday 18 February 2026 06:53:23 +0000 (0:00:01.145) 1:02:11.900 **** 2026-02-18 06:53:50.965959 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:53:50.965971 | orchestrator | 2026-02-18 06:53:50.965982 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-18 06:53:50.965993 | orchestrator | Wednesday 18 February 2026 06:53:25 +0000 (0:00:02.490) 1:02:14.391 **** 2026-02-18 06:53:50.966002 | orchestrator | ok: 
[testbed-node-3] 2026-02-18 06:53:50.966012 | orchestrator | 2026-02-18 06:53:50.966114 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-18 06:53:50.966126 | orchestrator | Wednesday 18 February 2026 06:53:26 +0000 (0:00:01.183) 1:02:15.574 **** 2026-02-18 06:53:50.966136 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-02-18 06:53:50.966146 | orchestrator | 2026-02-18 06:53:50.966155 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-18 06:53:50.966182 | orchestrator | Wednesday 18 February 2026 06:53:28 +0000 (0:00:01.307) 1:02:16.882 **** 2026-02-18 06:53:50.966192 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.966202 | orchestrator | 2026-02-18 06:53:50.966212 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-18 06:53:50.966222 | orchestrator | Wednesday 18 February 2026 06:53:29 +0000 (0:00:01.236) 1:02:18.118 **** 2026-02-18 06:53:50.966231 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.966241 | orchestrator | 2026-02-18 06:53:50.966250 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-18 06:53:50.966260 | orchestrator | Wednesday 18 February 2026 06:53:30 +0000 (0:00:01.229) 1:02:19.348 **** 2026-02-18 06:53:50.966270 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.966279 | orchestrator | 2026-02-18 06:53:50.966289 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-18 06:53:50.966305 | orchestrator | Wednesday 18 February 2026 06:53:31 +0000 (0:00:01.168) 1:02:20.516 **** 2026-02-18 06:53:50.966319 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.966338 | orchestrator | 2026-02-18 06:53:50.966348 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-18 06:53:50.966357 | orchestrator | Wednesday 18 February 2026 06:53:32 +0000 (0:00:01.184) 1:02:21.701 **** 2026-02-18 06:53:50.966367 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.966377 | orchestrator | 2026-02-18 06:53:50.966386 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-18 06:53:50.966396 | orchestrator | Wednesday 18 February 2026 06:53:34 +0000 (0:00:01.187) 1:02:22.888 **** 2026-02-18 06:53:50.966405 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.966415 | orchestrator | 2026-02-18 06:53:50.966424 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-18 06:53:50.966434 | orchestrator | Wednesday 18 February 2026 06:53:35 +0000 (0:00:01.124) 1:02:24.013 **** 2026-02-18 06:53:50.966444 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.966453 | orchestrator | 2026-02-18 06:53:50.966462 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-18 06:53:50.966472 | orchestrator | Wednesday 18 February 2026 06:53:36 +0000 (0:00:01.172) 1:02:25.185 **** 2026-02-18 06:53:50.966482 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:53:50.966491 | orchestrator | 2026-02-18 06:53:50.966506 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-18 06:53:50.966516 | orchestrator | Wednesday 18 February 2026 06:53:37 +0000 (0:00:01.165) 1:02:26.350 **** 2026-02-18 06:53:50.966525 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:53:50.966535 | orchestrator | 2026-02-18 06:53:50.966545 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-18 06:53:50.966554 | orchestrator | Wednesday 18 February 2026 06:53:38 +0000 (0:00:01.164) 1:02:27.515 **** 2026-02-18 06:53:50.966564 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-18 06:53:50.966573 | orchestrator | 2026-02-18 06:53:50.966583 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-18 06:53:50.966592 | orchestrator | Wednesday 18 February 2026 06:53:39 +0000 (0:00:01.188) 1:02:28.703 **** 2026-02-18 06:53:50.966602 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-18 06:53:50.966612 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-18 06:53:50.966621 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-18 06:53:50.966631 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-18 06:53:50.966640 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-18 06:53:50.966650 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-18 06:53:50.966659 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-18 06:53:50.966669 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-18 06:53:50.966678 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 06:53:50.966688 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 06:53:50.966698 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 06:53:50.966707 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 06:53:50.966717 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 06:53:50.966726 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 06:53:50.966736 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-18 06:53:50.966746 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-18 06:53:50.966755 | orchestrator | 2026-02-18 06:53:50.966765 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-18 06:53:50.966774 | orchestrator | Wednesday 18 February 2026 06:53:46 +0000 (0:00:06.488) 1:02:35.192 **** 2026-02-18 06:53:50.966784 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-18 06:53:50.966800 | orchestrator | 2026-02-18 06:53:50.966810 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-18 06:53:50.966819 | orchestrator | Wednesday 18 February 2026 06:53:47 +0000 (0:00:01.145) 1:02:36.337 **** 2026-02-18 06:53:50.966829 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 06:53:50.966852 | orchestrator | 2026-02-18 06:53:50.966862 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-18 06:53:50.966872 | orchestrator | Wednesday 18 February 2026 06:53:48 +0000 (0:00:01.510) 1:02:37.848 **** 2026-02-18 06:53:50.966881 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 06:53:50.966891 | orchestrator | 2026-02-18 06:53:50.966923 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-18 06:53:50.966941 | orchestrator | Wednesday 18 February 2026 06:53:50 +0000 (0:00:01.979) 1:02:39.827 **** 2026-02-18 06:54:41.769695 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.769812 | orchestrator | 2026-02-18 06:54:41.769830 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-18 06:54:41.769845 | orchestrator | Wednesday 18 February 2026 06:53:52 +0000 (0:00:01.119) 1:02:40.946 **** 2026-02-18 06:54:41.769857 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.769868 | 
orchestrator | 2026-02-18 06:54:41.769879 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-18 06:54:41.769891 | orchestrator | Wednesday 18 February 2026 06:53:53 +0000 (0:00:01.123) 1:02:42.069 **** 2026-02-18 06:54:41.769902 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.769961 | orchestrator | 2026-02-18 06:54:41.769973 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-18 06:54:41.769984 | orchestrator | Wednesday 18 February 2026 06:53:54 +0000 (0:00:01.143) 1:02:43.213 **** 2026-02-18 06:54:41.769995 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770007 | orchestrator | 2026-02-18 06:54:41.770071 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-18 06:54:41.770084 | orchestrator | Wednesday 18 February 2026 06:53:55 +0000 (0:00:01.151) 1:02:44.364 **** 2026-02-18 06:54:41.770095 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770106 | orchestrator | 2026-02-18 06:54:41.770117 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-18 06:54:41.770130 | orchestrator | Wednesday 18 February 2026 06:53:56 +0000 (0:00:01.108) 1:02:45.472 **** 2026-02-18 06:54:41.770141 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770152 | orchestrator | 2026-02-18 06:54:41.770163 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-18 06:54:41.770174 | orchestrator | Wednesday 18 February 2026 06:53:57 +0000 (0:00:01.148) 1:02:46.621 **** 2026-02-18 06:54:41.770185 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770196 | orchestrator | 2026-02-18 06:54:41.770207 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-18 06:54:41.770232 | orchestrator | Wednesday 18 February 2026 06:53:58 +0000 (0:00:01.144) 1:02:47.766 **** 2026-02-18 06:54:41.770247 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770259 | orchestrator | 2026-02-18 06:54:41.770272 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-18 06:54:41.770286 | orchestrator | Wednesday 18 February 2026 06:54:00 +0000 (0:00:01.151) 1:02:48.918 **** 2026-02-18 06:54:41.770298 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770311 | orchestrator | 2026-02-18 06:54:41.770323 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-18 06:54:41.770335 | orchestrator | Wednesday 18 February 2026 06:54:01 +0000 (0:00:01.195) 1:02:50.113 **** 2026-02-18 06:54:41.770347 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770388 | orchestrator | 2026-02-18 06:54:41.770401 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-18 06:54:41.770415 | orchestrator | Wednesday 18 February 2026 06:54:02 +0000 (0:00:01.201) 1:02:51.315 **** 2026-02-18 06:54:41.770427 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770439 | orchestrator | 2026-02-18 06:54:41.770452 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-18 06:54:41.770464 | orchestrator | Wednesday 18 February 2026 06:54:03 +0000 (0:00:01.179) 1:02:52.495 **** 2026-02-18 06:54:41.770477 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-18 06:54:41.770489 | orchestrator | 2026-02-18 06:54:41.770501 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-18 06:54:41.770513 | orchestrator | Wednesday 18 February 2026 06:54:07 +0000 (0:00:04.252) 1:02:56.747 **** 2026-02-18 06:54:41.770526 | orchestrator | 
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 06:54:41.770539 | orchestrator | 2026-02-18 06:54:41.770552 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-18 06:54:41.770565 | orchestrator | Wednesday 18 February 2026 06:54:09 +0000 (0:00:01.227) 1:02:57.975 **** 2026-02-18 06:54:41.770580 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-18 06:54:41.770596 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-18 06:54:41.770608 | orchestrator | 2026-02-18 06:54:41.770619 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-18 06:54:41.770630 | orchestrator | Wednesday 18 February 2026 06:54:13 +0000 (0:00:04.859) 1:03:02.835 **** 2026-02-18 06:54:41.770640 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770651 | orchestrator | 2026-02-18 06:54:41.770662 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-18 06:54:41.770673 | orchestrator | Wednesday 18 February 2026 06:54:15 +0000 (0:00:01.227) 1:03:04.062 **** 2026-02-18 06:54:41.770684 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770695 | orchestrator | 2026-02-18 06:54:41.770706 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:54:41.770734 | orchestrator | Wednesday 18 February 2026 06:54:16 +0000 (0:00:01.124) 1:03:05.187 **** 2026-02-18 06:54:41.770745 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770756 | orchestrator | 2026-02-18 06:54:41.770767 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:54:41.770778 | orchestrator | Wednesday 18 February 2026 06:54:17 +0000 (0:00:01.139) 1:03:06.327 **** 2026-02-18 06:54:41.770789 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770800 | orchestrator | 2026-02-18 06:54:41.770811 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:54:41.770821 | orchestrator | Wednesday 18 February 2026 06:54:18 +0000 (0:00:01.149) 1:03:07.477 **** 2026-02-18 06:54:41.770832 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.770843 | orchestrator | 2026-02-18 06:54:41.770854 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:54:41.770865 | orchestrator | Wednesday 18 February 2026 06:54:19 +0000 (0:00:01.201) 1:03:08.678 **** 2026-02-18 06:54:41.770875 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:54:41.770887 | orchestrator | 2026-02-18 06:54:41.770898 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:54:41.770972 | orchestrator | Wednesday 18 February 2026 06:54:21 +0000 (0:00:01.270) 1:03:09.949 **** 2026-02-18 06:54:41.770991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:54:41.771007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:54:41.771024 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 06:54:41.771040 | orchestrator | skipping: 
[testbed-node-3] 2026-02-18 06:54:41.771056 | orchestrator | 2026-02-18 06:54:41.771072 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:54:41.771090 | orchestrator | Wednesday 18 February 2026 06:54:22 +0000 (0:00:01.815) 1:03:11.765 **** 2026-02-18 06:54:41.771110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:54:41.771128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:54:41.771141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 06:54:41.771159 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.771170 | orchestrator | 2026-02-18 06:54:41.771181 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:54:41.771192 | orchestrator | Wednesday 18 February 2026 06:54:24 +0000 (0:00:01.889) 1:03:13.654 **** 2026-02-18 06:54:41.771203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-18 06:54:41.771213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-18 06:54:41.771224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-18 06:54:41.771234 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.771245 | orchestrator | 2026-02-18 06:54:41.771256 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:54:41.771267 | orchestrator | Wednesday 18 February 2026 06:54:26 +0000 (0:00:01.425) 1:03:15.080 **** 2026-02-18 06:54:41.771277 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:54:41.771288 | orchestrator | 2026-02-18 06:54:41.771298 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:54:41.771309 | orchestrator | Wednesday 18 February 2026 06:54:27 +0000 (0:00:01.311) 1:03:16.391 **** 2026-02-18 06:54:41.771320 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-02-18 06:54:41.771330 | orchestrator | 2026-02-18 06:54:41.771341 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-18 06:54:41.771352 | orchestrator | Wednesday 18 February 2026 06:54:28 +0000 (0:00:01.376) 1:03:17.768 **** 2026-02-18 06:54:41.771362 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:54:41.771373 | orchestrator | 2026-02-18 06:54:41.771384 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-18 06:54:41.771395 | orchestrator | Wednesday 18 February 2026 06:54:30 +0000 (0:00:01.816) 1:03:19.584 **** 2026-02-18 06:54:41.771405 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-02-18 06:54:41.771416 | orchestrator | 2026-02-18 06:54:41.771427 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-18 06:54:41.771438 | orchestrator | Wednesday 18 February 2026 06:54:32 +0000 (0:00:01.493) 1:03:21.078 **** 2026-02-18 06:54:41.771448 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 06:54:41.771459 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-18 06:54:41.771470 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 06:54:41.771480 | orchestrator | 2026-02-18 06:54:41.771491 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-18 06:54:41.771501 | orchestrator | Wednesday 18 February 2026 06:54:35 +0000 (0:00:03.234) 1:03:24.312 **** 2026-02-18 06:54:41.771512 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-18 06:54:41.771523 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-18 06:54:41.771533 | orchestrator | ok: [testbed-node-3] 2026-02-18 06:54:41.771544 | orchestrator | 2026-02-18 06:54:41.771555 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-18 06:54:41.771572 | orchestrator | Wednesday 18 February 2026 06:54:37 +0000 (0:00:01.951) 1:03:26.264 **** 2026-02-18 06:54:41.771583 | orchestrator | skipping: [testbed-node-3] 2026-02-18 06:54:41.771594 | orchestrator | 2026-02-18 06:54:41.771605 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-18 06:54:41.771616 | orchestrator | Wednesday 18 February 2026 06:54:38 +0000 (0:00:01.117) 1:03:27.381 **** 2026-02-18 06:54:41.771626 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-02-18 06:54:41.771638 | orchestrator | 2026-02-18 06:54:41.771649 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-18 06:54:41.771659 | orchestrator | Wednesday 18 February 2026 06:54:40 +0000 (0:00:01.600) 1:03:28.981 **** 2026-02-18 06:54:41.771679 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-18 06:55:55.574110 | orchestrator | 2026-02-18 06:55:55.574228 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-18 06:55:55.574247 | orchestrator | Wednesday 18 February 2026 06:54:41 +0000 (0:00:01.651) 1:03:30.633 **** 2026-02-18 06:55:55.574260 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 06:55:55.574273 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-18 06:55:55.574286 | orchestrator | 2026-02-18 06:55:55.574297 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-18 06:55:55.574308 | orchestrator | Wednesday 18 February 2026 06:54:46 +0000 (0:00:05.121) 1:03:35.755 **** 
2026-02-18 06:55:55.574319 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 06:55:55.574330 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-18 06:55:55.574341 | orchestrator |
2026-02-18 06:55:55.574352 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-18 06:55:55.574363 | orchestrator | Wednesday 18 February 2026 06:54:50 +0000 (0:00:03.227) 1:03:38.983 ****
2026-02-18 06:55:55.574374 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-02-18 06:55:55.574386 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:55:55.574398 | orchestrator |
2026-02-18 06:55:55.574409 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-18 06:55:55.574420 | orchestrator | Wednesday 18 February 2026 06:54:52 +0000 (0:00:02.026) 1:03:41.010 ****
2026-02-18 06:55:55.574431 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-02-18 06:55:55.574441 | orchestrator |
2026-02-18 06:55:55.574452 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-18 06:55:55.574463 | orchestrator | Wednesday 18 February 2026 06:54:53 +0000 (0:00:01.561) 1:03:42.572 ****
2026-02-18 06:55:55.574490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574546 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:55:55.574559 | orchestrator |
2026-02-18 06:55:55.574571 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-18 06:55:55.574606 | orchestrator | Wednesday 18 February 2026 06:54:55 +0000 (0:00:01.653) 1:03:44.226 ****
2026-02-18 06:55:55.574619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574678 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:55:55.574689 | orchestrator |
2026-02-18 06:55:55.574700 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-18 06:55:55.574711 | orchestrator | Wednesday 18 February 2026 06:54:56 +0000 (0:00:01.588) 1:03:45.815 ****
2026-02-18 06:55:55.574722 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574734 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574745 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574756 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574767 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-18 06:55:55.574778 | orchestrator |
2026-02-18 06:55:55.574789 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-18 06:55:55.574817 | orchestrator | Wednesday 18 February 2026 06:55:28 +0000 (0:00:31.139) 1:04:16.954 ****
2026-02-18 06:55:55.574829 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:55:55.574840 | orchestrator |
2026-02-18 06:55:55.574851 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-18 06:55:55.574862 | orchestrator | Wednesday 18 February 2026 06:55:29 +0000 (0:00:01.159) 1:04:18.113 ****
2026-02-18 06:55:55.574873 | orchestrator | skipping: [testbed-node-3]
2026-02-18 06:55:55.574884 | orchestrator |
2026-02-18 06:55:55.574895 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-18 06:55:55.574906 | orchestrator | Wednesday 18 February 2026 06:55:30 +0000 (0:00:01.103) 1:04:19.216 ****
2026-02-18 06:55:55.574940 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3
2026-02-18 06:55:55.574952 | orchestrator |
2026-02-18 06:55:55.574963 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-18 06:55:55.574973 | orchestrator | Wednesday 18 February 2026 06:55:31 +0000 (0:00:01.642) 1:04:20.859 ****
2026-02-18 06:55:55.574984 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3
2026-02-18 06:55:55.574995 | orchestrator |
2026-02-18 06:55:55.575006 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-18 06:55:55.575017 | orchestrator | Wednesday 18 February 2026 06:55:33 +0000 (0:00:01.499) 1:04:22.358 ****
2026-02-18 06:55:55.575027 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:55:55.575039 | orchestrator |
2026-02-18 06:55:55.575049 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-18 06:55:55.575060 | orchestrator | Wednesday 18 February 2026 06:55:35 +0000 (0:00:02.067) 1:04:24.426 ****
2026-02-18 06:55:55.575080 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:55:55.575091 | orchestrator |
2026-02-18 06:55:55.575102 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-18 06:55:55.575113 | orchestrator | Wednesday 18 February 2026 06:55:37 +0000 (0:00:01.931) 1:04:26.357 ****
2026-02-18 06:55:55.575123 | orchestrator | ok: [testbed-node-3]
2026-02-18 06:55:55.575134 | orchestrator |
2026-02-18 06:55:55.575150 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-18 06:55:55.575162 | orchestrator | Wednesday 18 February 2026 06:55:39 +0000 (0:00:02.266) 1:04:28.624 ****
2026-02-18 06:55:55.575173 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-18 06:55:55.575184 | orchestrator |
2026-02-18 06:55:55.575202 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-02-18 06:55:55.575220 | orchestrator |
2026-02-18 06:55:55.575248 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-18 06:55:55.575267 | orchestrator | Wednesday 18 February 2026 06:55:42 +0000 (0:00:02.778) 1:04:31.402 ****
2026-02-18 06:55:55.575285 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-02-18 06:55:55.575302 | orchestrator |
2026-02-18 06:55:55.575319 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-18 06:55:55.575336 | orchestrator | Wednesday 18 February 2026 06:55:43 +0000 (0:00:01.207) 1:04:32.610 ****
2026-02-18 06:55:55.575355 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:55:55.575374 | orchestrator |
2026-02-18 06:55:55.575391 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-18 06:55:55.575410 | orchestrator | Wednesday 18 February 2026 06:55:45 +0000 (0:00:01.605) 1:04:34.215 ****
2026-02-18 06:55:55.575429 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:55:55.575448 | orchestrator |
2026-02-18 06:55:55.575467 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-18 06:55:55.575485 | orchestrator | Wednesday 18 February 2026 06:55:46 +0000 (0:00:01.114) 1:04:35.330 ****
2026-02-18 06:55:55.575503 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:55:55.575523 | orchestrator |
2026-02-18 06:55:55.575542 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-18 06:55:55.575561 | orchestrator | Wednesday 18 February 2026 06:55:47 +0000 (0:00:01.480) 1:04:36.811 ****
2026-02-18 06:55:55.575580 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:55:55.575599 | orchestrator |
2026-02-18 06:55:55.575617 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-18 06:55:55.575636 | orchestrator | Wednesday 18 February 2026 06:55:49 +0000 (0:00:01.207) 1:04:38.018 ****
2026-02-18 06:55:55.575655 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:55:55.575674 | orchestrator |
2026-02-18 06:55:55.575694 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-18 06:55:55.575712 | orchestrator | Wednesday 18 February 2026 06:55:50 +0000 (0:00:01.269) 1:04:39.288 ****
2026-02-18 06:55:55.575731 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:55:55.575750 | orchestrator |
2026-02-18 06:55:55.575769 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-18 06:55:55.575787 | orchestrator | Wednesday 18 February 2026 06:55:51 +0000 (0:00:01.175) 1:04:40.464 ****
2026-02-18 06:55:55.575805 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:55:55.575824 | orchestrator |
2026-02-18 06:55:55.575842 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-18 06:55:55.575860 | orchestrator | Wednesday 18 February 2026 06:55:52 +0000 (0:00:01.136) 1:04:41.600 ****
2026-02-18 06:55:55.575879 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:55:55.575899 | orchestrator |
2026-02-18 06:55:55.575973 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-18 06:55:55.575995 | orchestrator | Wednesday 18 February 2026 06:55:53 +0000 (0:00:01.134) 1:04:42.735 ****
2026-02-18 06:55:55.576029 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:55:55.576049 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:55:55.576068 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:55:55.576083 | orchestrator |
2026-02-18 06:55:55.576094 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-18 06:55:55.576116 | orchestrator | Wednesday 18 February 2026 06:55:55 +0000 (0:00:01.251) 1:04:44.434 ****
2026-02-18 06:56:22.180398 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:56:22.180503 | orchestrator |
2026-02-18 06:56:22.180518 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-18 06:56:22.180530 | orchestrator | Wednesday 18 February 2026 06:55:56 +0000 (0:00:01.251) 1:04:45.685 ****
2026-02-18 06:56:22.180540 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-18 06:56:22.180551 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-18 06:56:22.180561 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-18 06:56:22.180571 | orchestrator |
2026-02-18 06:56:22.180581 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-18 06:56:22.180591 | orchestrator | Wednesday 18 February 2026 06:56:00 +0000 (0:00:03.385) 1:04:49.072 ****
2026-02-18 06:56:22.180601 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-18 06:56:22.180611 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-18 06:56:22.180621 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-18 06:56:22.180630 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:56:22.180640 | orchestrator |
2026-02-18 06:56:22.180650 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-18 06:56:22.180660 | orchestrator | Wednesday 18 February 2026 06:56:01 +0000 (0:00:01.458) 1:04:50.530 ****
2026-02-18 06:56:22.180672 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:56:22.180707 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:56:22.180723 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:56:22.180741 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:56:22.180758 | orchestrator |
2026-02-18 06:56:22.180775 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-18 06:56:22.180792 | orchestrator | Wednesday 18 February 2026 06:56:03 +0000 (0:00:02.048) 1:04:52.579 ****
2026-02-18 06:56:22.180812 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:56:22.180832 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:56:22.180875 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-18 06:56:22.180893 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:56:22.180910 | orchestrator |
2026-02-18 06:56:22.180974 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-18 06:56:22.180993 | orchestrator | Wednesday 18 February 2026 06:56:04 +0000 (0:00:01.159) 1:04:53.739 ****
2026-02-18 06:56:22.181037 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 06:55:57.376674', 'end': '2026-02-18 06:55:57.421353', 'delta': '0:00:00.044679', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-18 06:56:22.181061 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 06:55:57.972430', 'end': '2026-02-18 06:55:58.019374', 'delta': '0:00:00.046944', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-18 06:56:22.181088 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 06:55:58.906501', 'end': '2026-02-18 06:55:58.958164', 'delta': '0:00:00.051663', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-18 06:56:22.181108 | orchestrator |
2026-02-18 06:56:22.181126 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-18 06:56:22.181146 | orchestrator | Wednesday 18 February 2026 06:56:06 +0000 (0:00:01.292) 1:04:55.032 ****
2026-02-18 06:56:22.181165 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:56:22.181183 | orchestrator |
2026-02-18 06:56:22.181202 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-18 06:56:22.181220 | orchestrator | Wednesday 18 February 2026 06:56:08 +0000 (0:00:01.849) 1:04:56.881 ****
2026-02-18 06:56:22.181239 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:56:22.181257 | orchestrator |
2026-02-18 06:56:22.181274 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-18 06:56:22.181291 | orchestrator | Wednesday 18 February 2026 06:56:09 +0000 (0:00:01.417) 1:04:58.299 ****
2026-02-18 06:56:22.181309 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:56:22.181340 | orchestrator |
2026-02-18 06:56:22.181357 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-18 06:56:22.181373 | orchestrator | Wednesday 18 February 2026 06:56:10 +0000 (0:00:01.184) 1:04:59.483 ****
2026-02-18 06:56:22.181389 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-18 06:56:22.181405 | orchestrator |
2026-02-18 06:56:22.181421 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:56:22.181437 | orchestrator | Wednesday 18 February 2026 06:56:12 +0000 (0:00:02.010) 1:05:01.494 ****
2026-02-18 06:56:22.181453 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:56:22.181469 | orchestrator |
2026-02-18 06:56:22.181486 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-18 06:56:22.181502 | orchestrator | Wednesday 18 February 2026 06:56:13 +0000 (0:00:01.205) 1:05:02.700 ****
2026-02-18 06:56:22.181517 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:56:22.181533 | orchestrator |
2026-02-18 06:56:22.181549 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-18 06:56:22.181565 | orchestrator | Wednesday 18 February 2026 06:56:14 +0000 (0:00:01.162) 1:05:03.862 ****
2026-02-18 06:56:22.181582 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:56:22.181599 | orchestrator |
2026-02-18 06:56:22.181614 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-18 06:56:22.181631 | orchestrator | Wednesday 18 February 2026 06:56:16 +0000 (0:00:01.290) 1:05:05.153 ****
2026-02-18 06:56:22.181646 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:56:22.181663 | orchestrator |
2026-02-18 06:56:22.181676 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-18 06:56:22.181686 | orchestrator | Wednesday 18 February 2026 06:56:17 +0000 (0:00:01.165) 1:05:06.318 ****
2026-02-18 06:56:22.181695 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:56:22.181705 | orchestrator |
2026-02-18 06:56:22.181715 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-18 06:56:22.181724 | orchestrator | Wednesday 18 February 2026 06:56:18 +0000 (0:00:01.142) 1:05:07.460 ****
2026-02-18 06:56:22.181734 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:56:22.181744 | orchestrator |
2026-02-18 06:56:22.181754 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-18 06:56:22.181763 | orchestrator | Wednesday 18 February 2026 06:56:19 +0000 (0:00:01.223) 1:05:08.684 ****
2026-02-18 06:56:22.181773 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:56:22.181782 | orchestrator |
2026-02-18 06:56:22.181792 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-18 06:56:22.181802 | orchestrator | Wednesday 18 February 2026 06:56:20 +0000 (0:00:01.135) 1:05:09.819 ****
2026-02-18 06:56:22.181811 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:56:22.181821 | orchestrator |
2026-02-18 06:56:22.181831 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-18 06:56:22.181854 | orchestrator | Wednesday 18 February 2026 06:56:22 +0000 (0:00:01.223) 1:05:11.042 ****
2026-02-18 06:56:24.868674 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:56:24.868782 | orchestrator |
2026-02-18 06:56:24.868801 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-18 06:56:24.868816 | orchestrator | Wednesday 18 February 2026 06:56:23 +0000 (0:00:01.145) 1:05:12.187 ****
2026-02-18 06:56:24.868831 | orchestrator | ok: [testbed-node-4]
2026-02-18 06:56:24.868843 | orchestrator |
2026-02-18 06:56:24.868854 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-18 06:56:24.868865 | orchestrator | Wednesday 18 February 2026 06:56:24 +0000 (0:00:01.296) 1:05:13.484 ****
2026-02-18 06:56:24.868881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:56:24.868970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'uuids': ['979a0cee-d595-4490-b8ce-61c0ee691ca0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17']}})
2026-02-18 06:56:24.868989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4d92644', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-18 06:56:24.869002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906']}})
2026-02-18 06:56:24.869014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:56:24.869027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:56:24.869060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-18 06:56:24.869073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:56:24.869100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF', 'dm-uuid-CRYPT-LUKS2-618550ddd31f436ab0c76e785ef9ce84-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-18 06:56:24.869114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:56:24.869127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'uuids': ['618550dd-d31f-436a-b0c7-6e785ef9ce84'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF']}})
2026-02-18 06:56:24.869140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1']}})
2026-02-18 06:56:24.869152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:56:24.869185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f33eab1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-18 06:56:26.291599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:56:26.291684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-18 06:56:26.291695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17', 'dm-uuid-CRYPT-LUKS2-979a0ceed5954490b8ce61c0ee691ca0-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-18 06:56:26.291705 | orchestrator | skipping: [testbed-node-4]
2026-02-18 06:56:26.291713 | orchestrator |
2026-02-18 06:56:26.291721 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-18 06:56:26.291729 | orchestrator | Wednesday 18 February 2026 06:56:26 +0000 (0:00:01.420) 1:05:14.904 ****
2026-02-18 06:56:26.291736 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-18 06:56:26.291744 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1', 'dm-uuid-LVM-Pocq4oUbLWP00qFUaSC7iedZwJvFlkM7R9wKXfhCtaSVyAYyhVqEuBX0r2AiTW17'], 'uuids': ['979a0cee-d595-4490-b8ce-61c0ee691ca0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17']}}, 'ansible_loop_var': 'item'})
2026-02-18 06:56:26.291784 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b', 'scsi-SQEMU_QEMU_HARDDISK_c4d92644-33e6-4467-94f0-587e390b3e2b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU',
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4d92644', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:26.291805 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2oYHmj-qVpe-TkqB-ys3g-HxnL-TPEI-xGi69f', 'scsi-0QEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3', 'scsi-SQEMU_QEMU_HARDDISK_d8cf58e5-ac4f-4786-ab18-80916d08d0f3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:26.291815 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:26.291822 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:26.291829 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:26.291840 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:26.291854 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF', 'dm-uuid-CRYPT-LUKS2-618550ddd31f436ab0c76e785ef9ce84-Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:31.646290 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:31.646376 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ef111f9--34b8--55e5--9a40--00a35805e906-osd--block--8ef111f9--34b8--55e5--9a40--00a35805e906', 'dm-uuid-LVM-s1GxD985whXs1zhfI3pfkiqWvGxQhvPqGgid6IaTf4Xhm4q7Tza11pYwYNSF0UIF'], 'uuids': ['618550dd-d31f-436a-b0c7-6e785ef9ce84'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd8cf58e5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Ggid6I-aTf4-Xhm4-q7Tz-a11p-YwYN-SF0UIF']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:31.646387 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cms17u-KNuP-Trp0-HJHx-2cLm-1FUj-BEKkfW', 'scsi-0QEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19', 'scsi-SQEMU_QEMU_HARDDISK_f0ab076a-73e2-49a0-ad75-65c4c5564b19'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f0ab076a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--47b33137--1c4f--52d4--af64--ebc2c48f95b1-osd--block--47b33137--1c4f--52d4--af64--ebc2c48f95b1']}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:31.646416 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:31.646448 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f33eab1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1', 'scsi-SQEMU_QEMU_HARDDISK_f33eab1c-67cb-4270-8b47-8509ec50b93a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:31.646457 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:31.646469 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:31.646476 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17', 'dm-uuid-CRYPT-LUKS2-979a0ceed5954490b8ce61c0ee691ca0-R9wKXf-hCta-SVyA-YyhV-qEuB-X0r2-AiTW17'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 06:56:31.646484 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:56:31.646492 | orchestrator | 2026-02-18 06:56:31.646500 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 06:56:31.646511 | orchestrator | Wednesday 18 February 2026 06:56:27 +0000 (0:00:01.430) 1:05:16.335 **** 2026-02-18 06:56:31.646517 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:56:31.646525 | orchestrator | 2026-02-18 06:56:31.646531 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 06:56:31.646537 | orchestrator | Wednesday 18 February 2026 06:56:28 +0000 (0:00:01.501) 1:05:17.836 **** 2026-02-18 06:56:31.646543 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:56:31.646550 | orchestrator | 2026-02-18 06:56:31.646556 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:56:31.646562 | orchestrator | Wednesday 18 February 2026 06:56:30 +0000 (0:00:01.146) 1:05:18.982 **** 2026-02-18 06:56:31.646568 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:56:31.646574 | orchestrator | 2026-02-18 06:56:31.646580 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:56:31.646590 | orchestrator | Wednesday 18 February 2026 06:56:31 +0000 (0:00:01.532) 1:05:20.515 **** 2026-02-18 06:57:14.333450 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.333566 | orchestrator | 2026-02-18 06:57:14.333583 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 06:57:14.333597 | orchestrator | Wednesday 18 February 2026 06:56:32 +0000 (0:00:01.167) 1:05:21.683 **** 2026-02-18 06:57:14.333608 | orchestrator | skipping: [testbed-node-4] 2026-02-18 
06:57:14.333619 | orchestrator | 2026-02-18 06:57:14.333630 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 06:57:14.333641 | orchestrator | Wednesday 18 February 2026 06:56:34 +0000 (0:00:01.280) 1:05:22.963 **** 2026-02-18 06:57:14.333652 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.333663 | orchestrator | 2026-02-18 06:57:14.333674 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 06:57:14.333685 | orchestrator | Wednesday 18 February 2026 06:56:35 +0000 (0:00:01.205) 1:05:24.169 **** 2026-02-18 06:57:14.333696 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-18 06:57:14.333707 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-18 06:57:14.333718 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-18 06:57:14.333729 | orchestrator | 2026-02-18 06:57:14.333739 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 06:57:14.333750 | orchestrator | Wednesday 18 February 2026 06:56:37 +0000 (0:00:02.015) 1:05:26.184 **** 2026-02-18 06:57:14.333786 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-18 06:57:14.333798 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-18 06:57:14.333809 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-18 06:57:14.333819 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.333830 | orchestrator | 2026-02-18 06:57:14.333841 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 06:57:14.333852 | orchestrator | Wednesday 18 February 2026 06:56:38 +0000 (0:00:01.183) 1:05:27.368 **** 2026-02-18 06:57:14.333862 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-18 06:57:14.333874 | 
orchestrator | 2026-02-18 06:57:14.333886 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:57:14.333898 | orchestrator | Wednesday 18 February 2026 06:56:39 +0000 (0:00:01.151) 1:05:28.519 **** 2026-02-18 06:57:14.333908 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.333919 | orchestrator | 2026-02-18 06:57:14.333960 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:57:14.333972 | orchestrator | Wednesday 18 February 2026 06:56:40 +0000 (0:00:01.231) 1:05:29.751 **** 2026-02-18 06:57:14.333984 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.333996 | orchestrator | 2026-02-18 06:57:14.334008 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:57:14.334085 | orchestrator | Wednesday 18 February 2026 06:56:42 +0000 (0:00:01.135) 1:05:30.886 **** 2026-02-18 06:57:14.334099 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.334111 | orchestrator | 2026-02-18 06:57:14.334123 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:57:14.334135 | orchestrator | Wednesday 18 February 2026 06:56:43 +0000 (0:00:01.165) 1:05:32.051 **** 2026-02-18 06:57:14.334147 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:14.334159 | orchestrator | 2026-02-18 06:57:14.334172 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:57:14.334184 | orchestrator | Wednesday 18 February 2026 06:56:44 +0000 (0:00:01.245) 1:05:33.297 **** 2026-02-18 06:57:14.334196 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-18 06:57:14.334208 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-18 06:57:14.334221 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-02-18 06:57:14.334232 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.334244 | orchestrator | 2026-02-18 06:57:14.334256 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:57:14.334268 | orchestrator | Wednesday 18 February 2026 06:56:45 +0000 (0:00:01.495) 1:05:34.792 **** 2026-02-18 06:57:14.334281 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-18 06:57:14.334293 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-18 06:57:14.334305 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-18 06:57:14.334317 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.334329 | orchestrator | 2026-02-18 06:57:14.334341 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:57:14.334352 | orchestrator | Wednesday 18 February 2026 06:56:47 +0000 (0:00:01.491) 1:05:36.285 **** 2026-02-18 06:57:14.334363 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-18 06:57:14.334373 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-18 06:57:14.334384 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-18 06:57:14.334394 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.334405 | orchestrator | 2026-02-18 06:57:14.334429 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:57:14.334441 | orchestrator | Wednesday 18 February 2026 06:56:48 +0000 (0:00:01.455) 1:05:37.740 **** 2026-02-18 06:57:14.334461 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:14.334472 | orchestrator | 2026-02-18 06:57:14.334482 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:57:14.334493 | orchestrator | Wednesday 18 February 2026 06:56:50 +0000 
(0:00:01.242) 1:05:38.983 **** 2026-02-18 06:57:14.334504 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-18 06:57:14.334515 | orchestrator | 2026-02-18 06:57:14.334525 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 06:57:14.334536 | orchestrator | Wednesday 18 February 2026 06:56:51 +0000 (0:00:01.397) 1:05:40.381 **** 2026-02-18 06:57:14.334565 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:57:14.334577 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:57:14.334587 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:57:14.334598 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 06:57:14.334609 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-18 06:57:14.334619 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:57:14.334630 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:57:14.334640 | orchestrator | 2026-02-18 06:57:14.334651 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 06:57:14.334662 | orchestrator | Wednesday 18 February 2026 06:56:53 +0000 (0:00:02.189) 1:05:42.570 **** 2026-02-18 06:57:14.334672 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 06:57:14.334769 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 06:57:14.334782 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 06:57:14.334792 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-18 06:57:14.334803 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-18 06:57:14.334814 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-18 06:57:14.334825 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 06:57:14.334835 | orchestrator | 2026-02-18 06:57:14.334846 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-02-18 06:57:14.334857 | orchestrator | Wednesday 18 February 2026 06:56:56 +0000 (0:00:02.386) 1:05:44.957 **** 2026-02-18 06:57:14.334868 | orchestrator | changed: [testbed-node-4] 2026-02-18 06:57:14.334879 | orchestrator | 2026-02-18 06:57:14.334889 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-02-18 06:57:14.334900 | orchestrator | Wednesday 18 February 2026 06:56:58 +0000 (0:00:01.931) 1:05:46.889 **** 2026-02-18 06:57:14.334911 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 06:57:14.334922 | orchestrator | 2026-02-18 06:57:14.334951 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-02-18 06:57:14.334962 | orchestrator | Wednesday 18 February 2026 06:57:00 +0000 (0:00:02.714) 1:05:49.603 **** 2026-02-18 06:57:14.334973 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 06:57:14.334983 | orchestrator | 2026-02-18 06:57:14.334994 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 06:57:14.335005 | orchestrator | Wednesday 18 February 2026 06:57:02 +0000 (0:00:02.011) 1:05:51.615 **** 2026-02-18 06:57:14.335016 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-18 06:57:14.335027 | orchestrator | 2026-02-18 06:57:14.335048 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 06:57:14.335059 | orchestrator | Wednesday 18 February 2026 06:57:03 +0000 (0:00:01.158) 1:05:52.773 **** 2026-02-18 06:57:14.335070 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-18 06:57:14.335080 | orchestrator | 2026-02-18 06:57:14.335091 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 06:57:14.335102 | orchestrator | Wednesday 18 February 2026 06:57:05 +0000 (0:00:01.146) 1:05:53.920 **** 2026-02-18 06:57:14.335113 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.335123 | orchestrator | 2026-02-18 06:57:14.335134 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 06:57:14.335145 | orchestrator | Wednesday 18 February 2026 06:57:06 +0000 (0:00:01.180) 1:05:55.101 **** 2026-02-18 06:57:14.335156 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:14.335166 | orchestrator | 2026-02-18 06:57:14.335177 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-02-18 06:57:14.335188 | orchestrator | Wednesday 18 February 2026 06:57:07 +0000 (0:00:01.515) 1:05:56.616 **** 2026-02-18 06:57:14.335199 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:14.335210 | orchestrator | 2026-02-18 06:57:14.335220 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-18 06:57:14.335231 | orchestrator | Wednesday 18 February 2026 06:57:09 +0000 (0:00:01.540) 1:05:58.157 **** 2026-02-18 06:57:14.335242 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:14.335253 | orchestrator | 2026-02-18 06:57:14.335264 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-18 06:57:14.335282 | orchestrator | Wednesday 18 February 2026 06:57:10 +0000 (0:00:01.573) 1:05:59.730 **** 2026-02-18 06:57:14.335293 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.335304 | orchestrator | 2026-02-18 06:57:14.335314 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-18 06:57:14.335325 | orchestrator | Wednesday 18 February 2026 06:57:11 +0000 (0:00:01.136) 1:06:00.867 **** 2026-02-18 06:57:14.335336 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.335347 | orchestrator | 2026-02-18 06:57:14.335358 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-18 06:57:14.335369 | orchestrator | Wednesday 18 February 2026 06:57:13 +0000 (0:00:01.129) 1:06:01.997 **** 2026-02-18 06:57:14.335380 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:14.335391 | orchestrator | 2026-02-18 06:57:14.335402 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-18 06:57:14.335426 | orchestrator | Wednesday 18 February 2026 06:57:14 +0000 (0:00:01.202) 1:06:03.200 **** 2026-02-18 06:57:55.119396 | 
orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:55.119515 | orchestrator | 2026-02-18 06:57:55.119532 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-18 06:57:55.119545 | orchestrator | Wednesday 18 February 2026 06:57:15 +0000 (0:00:01.587) 1:06:04.787 **** 2026-02-18 06:57:55.119556 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:55.119567 | orchestrator | 2026-02-18 06:57:55.119578 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-18 06:57:55.119590 | orchestrator | Wednesday 18 February 2026 06:57:17 +0000 (0:00:01.997) 1:06:06.784 **** 2026-02-18 06:57:55.119601 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.119612 | orchestrator | 2026-02-18 06:57:55.119623 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 06:57:55.119634 | orchestrator | Wednesday 18 February 2026 06:57:18 +0000 (0:00:00.822) 1:06:07.607 **** 2026-02-18 06:57:55.119646 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.119657 | orchestrator | 2026-02-18 06:57:55.119668 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 06:57:55.119679 | orchestrator | Wednesday 18 February 2026 06:57:19 +0000 (0:00:00.805) 1:06:08.412 **** 2026-02-18 06:57:55.119690 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:55.119727 | orchestrator | 2026-02-18 06:57:55.119739 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 06:57:55.119750 | orchestrator | Wednesday 18 February 2026 06:57:20 +0000 (0:00:00.815) 1:06:09.228 **** 2026-02-18 06:57:55.119761 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:55.119771 | orchestrator | 2026-02-18 06:57:55.119782 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 06:57:55.119793 
| orchestrator | Wednesday 18 February 2026 06:57:21 +0000 (0:00:00.804) 1:06:10.033 **** 2026-02-18 06:57:55.119804 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:55.119815 | orchestrator | 2026-02-18 06:57:55.119826 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 06:57:55.119836 | orchestrator | Wednesday 18 February 2026 06:57:21 +0000 (0:00:00.799) 1:06:10.833 **** 2026-02-18 06:57:55.119847 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.119858 | orchestrator | 2026-02-18 06:57:55.119869 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 06:57:55.119879 | orchestrator | Wednesday 18 February 2026 06:57:22 +0000 (0:00:00.780) 1:06:11.614 **** 2026-02-18 06:57:55.119890 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.119900 | orchestrator | 2026-02-18 06:57:55.119911 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 06:57:55.119922 | orchestrator | Wednesday 18 February 2026 06:57:23 +0000 (0:00:00.817) 1:06:12.431 **** 2026-02-18 06:57:55.119966 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.119980 | orchestrator | 2026-02-18 06:57:55.119992 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 06:57:55.120005 | orchestrator | Wednesday 18 February 2026 06:57:24 +0000 (0:00:00.819) 1:06:13.251 **** 2026-02-18 06:57:55.120018 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:55.120030 | orchestrator | 2026-02-18 06:57:55.120044 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 06:57:55.120063 | orchestrator | Wednesday 18 February 2026 06:57:25 +0000 (0:00:00.827) 1:06:14.079 **** 2026-02-18 06:57:55.120091 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:55.120110 | orchestrator | 2026-02-18 06:57:55.120129 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-18 06:57:55.120147 | orchestrator | Wednesday 18 February 2026 06:57:25 +0000 (0:00:00.790) 1:06:14.869 **** 2026-02-18 06:57:55.120163 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120181 | orchestrator | 2026-02-18 06:57:55.120199 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-18 06:57:55.120216 | orchestrator | Wednesday 18 February 2026 06:57:26 +0000 (0:00:00.781) 1:06:15.651 **** 2026-02-18 06:57:55.120233 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120252 | orchestrator | 2026-02-18 06:57:55.120276 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-18 06:57:55.120296 | orchestrator | Wednesday 18 February 2026 06:57:27 +0000 (0:00:00.838) 1:06:16.489 **** 2026-02-18 06:57:55.120314 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120333 | orchestrator | 2026-02-18 06:57:55.120350 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-18 06:57:55.120368 | orchestrator | Wednesday 18 February 2026 06:57:28 +0000 (0:00:00.801) 1:06:17.290 **** 2026-02-18 06:57:55.120386 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120404 | orchestrator | 2026-02-18 06:57:55.120422 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-18 06:57:55.120439 | orchestrator | Wednesday 18 February 2026 06:57:29 +0000 (0:00:00.766) 1:06:18.057 **** 2026-02-18 06:57:55.120451 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120462 | orchestrator | 2026-02-18 06:57:55.120473 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-18 06:57:55.120483 | orchestrator | Wednesday 18 February 2026 06:57:29 +0000 (0:00:00.767) 1:06:18.824 **** 
2026-02-18 06:57:55.120494 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120516 | orchestrator | 2026-02-18 06:57:55.120542 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-18 06:57:55.120553 | orchestrator | Wednesday 18 February 2026 06:57:30 +0000 (0:00:00.796) 1:06:19.621 **** 2026-02-18 06:57:55.120564 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120574 | orchestrator | 2026-02-18 06:57:55.120586 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-18 06:57:55.120597 | orchestrator | Wednesday 18 February 2026 06:57:31 +0000 (0:00:00.788) 1:06:20.409 **** 2026-02-18 06:57:55.120648 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120670 | orchestrator | 2026-02-18 06:57:55.120681 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-18 06:57:55.120692 | orchestrator | Wednesday 18 February 2026 06:57:32 +0000 (0:00:00.815) 1:06:21.226 **** 2026-02-18 06:57:55.120703 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120714 | orchestrator | 2026-02-18 06:57:55.120746 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-18 06:57:55.120757 | orchestrator | Wednesday 18 February 2026 06:57:33 +0000 (0:00:00.770) 1:06:21.996 **** 2026-02-18 06:57:55.120768 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120779 | orchestrator | 2026-02-18 06:57:55.120790 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-18 06:57:55.120801 | orchestrator | Wednesday 18 February 2026 06:57:33 +0000 (0:00:00.801) 1:06:22.797 **** 2026-02-18 06:57:55.120812 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120822 | orchestrator | 2026-02-18 06:57:55.120833 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-18 06:57:55.120844 | orchestrator | Wednesday 18 February 2026 06:57:34 +0000 (0:00:00.778) 1:06:23.576 **** 2026-02-18 06:57:55.120855 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.120871 | orchestrator | 2026-02-18 06:57:55.120890 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-18 06:57:55.120917 | orchestrator | Wednesday 18 February 2026 06:57:35 +0000 (0:00:00.831) 1:06:24.407 **** 2026-02-18 06:57:55.120991 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:55.121010 | orchestrator | 2026-02-18 06:57:55.121031 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-18 06:57:55.121047 | orchestrator | Wednesday 18 February 2026 06:57:37 +0000 (0:00:01.643) 1:06:26.051 **** 2026-02-18 06:57:55.121065 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:55.121081 | orchestrator | 2026-02-18 06:57:55.121097 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-18 06:57:55.121114 | orchestrator | Wednesday 18 February 2026 06:57:39 +0000 (0:00:01.973) 1:06:28.025 **** 2026-02-18 06:57:55.121130 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-18 06:57:55.121149 | orchestrator | 2026-02-18 06:57:55.121166 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-18 06:57:55.121183 | orchestrator | Wednesday 18 February 2026 06:57:40 +0000 (0:00:01.159) 1:06:29.184 **** 2026-02-18 06:57:55.121199 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.121218 | orchestrator | 2026-02-18 06:57:55.121238 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-18 06:57:55.121257 | orchestrator | Wednesday 18 February 2026 06:57:41 +0000 (0:00:01.172) 1:06:30.357 **** 
2026-02-18 06:57:55.121274 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.121289 | orchestrator | 2026-02-18 06:57:55.121300 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-18 06:57:55.121311 | orchestrator | Wednesday 18 February 2026 06:57:42 +0000 (0:00:01.151) 1:06:31.509 **** 2026-02-18 06:57:55.121322 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-18 06:57:55.121333 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-18 06:57:55.121344 | orchestrator | 2026-02-18 06:57:55.121367 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-18 06:57:55.121378 | orchestrator | Wednesday 18 February 2026 06:57:44 +0000 (0:00:01.904) 1:06:33.414 **** 2026-02-18 06:57:55.121389 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:55.121400 | orchestrator | 2026-02-18 06:57:55.121411 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-18 06:57:55.121427 | orchestrator | Wednesday 18 February 2026 06:57:45 +0000 (0:00:01.451) 1:06:34.866 **** 2026-02-18 06:57:55.121453 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.121474 | orchestrator | 2026-02-18 06:57:55.121493 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-18 06:57:55.121511 | orchestrator | Wednesday 18 February 2026 06:57:47 +0000 (0:00:01.113) 1:06:35.980 **** 2026-02-18 06:57:55.121530 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.121548 | orchestrator | 2026-02-18 06:57:55.121566 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-18 06:57:55.121585 | orchestrator | Wednesday 18 February 2026 06:57:47 +0000 (0:00:00.864) 1:06:36.844 **** 2026-02-18 06:57:55.121603 | orchestrator | 
skipping: [testbed-node-4] 2026-02-18 06:57:55.121621 | orchestrator | 2026-02-18 06:57:55.121638 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-18 06:57:55.121655 | orchestrator | Wednesday 18 February 2026 06:57:48 +0000 (0:00:00.790) 1:06:37.635 **** 2026-02-18 06:57:55.121673 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-02-18 06:57:55.121691 | orchestrator | 2026-02-18 06:57:55.121709 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-18 06:57:55.121726 | orchestrator | Wednesday 18 February 2026 06:57:49 +0000 (0:00:01.152) 1:06:38.788 **** 2026-02-18 06:57:55.121744 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:57:55.121762 | orchestrator | 2026-02-18 06:57:55.121779 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-18 06:57:55.121798 | orchestrator | Wednesday 18 February 2026 06:57:51 +0000 (0:00:01.703) 1:06:40.492 **** 2026-02-18 06:57:55.121827 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-18 06:57:55.121846 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-18 06:57:55.121863 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-18 06:57:55.121881 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.121899 | orchestrator | 2026-02-18 06:57:55.121917 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-18 06:57:55.122009 | orchestrator | Wednesday 18 February 2026 06:57:52 +0000 (0:00:01.191) 1:06:41.683 **** 2026-02-18 06:57:55.122116 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.122135 | orchestrator | 2026-02-18 06:57:55.122153 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-18 06:57:55.122172 | orchestrator | Wednesday 18 February 2026 06:57:53 +0000 (0:00:01.112) 1:06:42.796 **** 2026-02-18 06:57:55.122191 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:57:55.122208 | orchestrator | 2026-02-18 06:57:55.122245 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-18 06:58:38.649828 | orchestrator | Wednesday 18 February 2026 06:57:55 +0000 (0:00:01.188) 1:06:43.984 **** 2026-02-18 06:58:38.649963 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.649977 | orchestrator | 2026-02-18 06:58:38.649985 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-18 06:58:38.649992 | orchestrator | Wednesday 18 February 2026 06:57:56 +0000 (0:00:01.211) 1:06:45.196 **** 2026-02-18 06:58:38.649998 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650005 | orchestrator | 2026-02-18 06:58:38.650011 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-18 06:58:38.650058 | orchestrator | Wednesday 18 February 2026 06:57:57 +0000 (0:00:01.150) 1:06:46.347 **** 2026-02-18 06:58:38.650080 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650084 | orchestrator | 2026-02-18 06:58:38.650088 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-18 06:58:38.650092 | orchestrator | Wednesday 18 February 2026 06:57:58 +0000 (0:00:00.800) 1:06:47.148 **** 2026-02-18 06:58:38.650096 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:58:38.650100 | orchestrator | 2026-02-18 06:58:38.650104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-18 06:58:38.650109 | orchestrator | Wednesday 18 February 2026 06:58:00 +0000 (0:00:02.232) 1:06:49.381 **** 2026-02-18 06:58:38.650113 | orchestrator | ok: 
[testbed-node-4] 2026-02-18 06:58:38.650117 | orchestrator | 2026-02-18 06:58:38.650121 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-18 06:58:38.650124 | orchestrator | Wednesday 18 February 2026 06:58:01 +0000 (0:00:00.793) 1:06:50.174 **** 2026-02-18 06:58:38.650147 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-02-18 06:58:38.650152 | orchestrator | 2026-02-18 06:58:38.650156 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-18 06:58:38.650159 | orchestrator | Wednesday 18 February 2026 06:58:02 +0000 (0:00:01.132) 1:06:51.307 **** 2026-02-18 06:58:38.650163 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650167 | orchestrator | 2026-02-18 06:58:38.650171 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-18 06:58:38.650174 | orchestrator | Wednesday 18 February 2026 06:58:03 +0000 (0:00:01.190) 1:06:52.497 **** 2026-02-18 06:58:38.650178 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650182 | orchestrator | 2026-02-18 06:58:38.650186 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-18 06:58:38.650189 | orchestrator | Wednesday 18 February 2026 06:58:04 +0000 (0:00:01.166) 1:06:53.664 **** 2026-02-18 06:58:38.650193 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650197 | orchestrator | 2026-02-18 06:58:38.650201 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-18 06:58:38.650204 | orchestrator | Wednesday 18 February 2026 06:58:05 +0000 (0:00:01.175) 1:06:54.840 **** 2026-02-18 06:58:38.650208 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650212 | orchestrator | 2026-02-18 06:58:38.650216 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-18 06:58:38.650219 | orchestrator | Wednesday 18 February 2026 06:58:07 +0000 (0:00:01.179) 1:06:56.019 **** 2026-02-18 06:58:38.650223 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650227 | orchestrator | 2026-02-18 06:58:38.650231 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-18 06:58:38.650234 | orchestrator | Wednesday 18 February 2026 06:58:08 +0000 (0:00:01.485) 1:06:57.505 **** 2026-02-18 06:58:38.650238 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650242 | orchestrator | 2026-02-18 06:58:38.650246 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-18 06:58:38.650250 | orchestrator | Wednesday 18 February 2026 06:58:09 +0000 (0:00:01.248) 1:06:58.753 **** 2026-02-18 06:58:38.650253 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650257 | orchestrator | 2026-02-18 06:58:38.650261 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-18 06:58:38.650265 | orchestrator | Wednesday 18 February 2026 06:58:11 +0000 (0:00:01.173) 1:06:59.926 **** 2026-02-18 06:58:38.650268 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650272 | orchestrator | 2026-02-18 06:58:38.650276 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-18 06:58:38.650279 | orchestrator | Wednesday 18 February 2026 06:58:12 +0000 (0:00:01.193) 1:07:01.120 **** 2026-02-18 06:58:38.650283 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:58:38.650287 | orchestrator | 2026-02-18 06:58:38.650291 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-18 06:58:38.650294 | orchestrator | Wednesday 18 February 2026 06:58:13 +0000 (0:00:00.808) 1:07:01.929 **** 2026-02-18 06:58:38.650302 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-18 06:58:38.650306 | orchestrator | 2026-02-18 06:58:38.650310 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-18 06:58:38.650314 | orchestrator | Wednesday 18 February 2026 06:58:14 +0000 (0:00:01.162) 1:07:03.092 **** 2026-02-18 06:58:38.650318 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-18 06:58:38.650322 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-18 06:58:38.650326 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-18 06:58:38.650330 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-18 06:58:38.650333 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-18 06:58:38.650337 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-18 06:58:38.650341 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-18 06:58:38.650344 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-18 06:58:38.650349 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-18 06:58:38.650352 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-18 06:58:38.650356 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-18 06:58:38.650371 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-18 06:58:38.650402 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-18 06:58:38.650406 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-18 06:58:38.650411 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-18 06:58:38.650415 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-18 06:58:38.650419 | orchestrator | 2026-02-18 06:58:38.650424 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-18 06:58:38.650428 | orchestrator | Wednesday 18 February 2026 06:58:20 +0000 (0:00:06.390) 1:07:09.483 **** 2026-02-18 06:58:38.650432 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-18 06:58:38.650437 | orchestrator | 2026-02-18 06:58:38.650441 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-18 06:58:38.650445 | orchestrator | Wednesday 18 February 2026 06:58:21 +0000 (0:00:01.216) 1:07:10.699 **** 2026-02-18 06:58:38.650450 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 06:58:38.650455 | orchestrator | 2026-02-18 06:58:38.650459 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-18 06:58:38.650464 | orchestrator | Wednesday 18 February 2026 06:58:23 +0000 (0:00:01.472) 1:07:12.172 **** 2026-02-18 06:58:38.650468 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 06:58:38.650472 | orchestrator | 2026-02-18 06:58:38.650476 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-18 06:58:38.650481 | orchestrator | Wednesday 18 February 2026 06:58:24 +0000 (0:00:01.614) 1:07:13.786 **** 2026-02-18 06:58:38.650485 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650490 | orchestrator | 2026-02-18 06:58:38.650494 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-18 06:58:38.650498 | orchestrator | Wednesday 18 February 2026 06:58:25 +0000 (0:00:00.868) 1:07:14.655 **** 2026-02-18 06:58:38.650502 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650507 | 
orchestrator | 2026-02-18 06:58:38.650511 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-18 06:58:38.650515 | orchestrator | Wednesday 18 February 2026 06:58:26 +0000 (0:00:00.795) 1:07:15.451 **** 2026-02-18 06:58:38.650520 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650528 | orchestrator | 2026-02-18 06:58:38.650533 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-18 06:58:38.650537 | orchestrator | Wednesday 18 February 2026 06:58:27 +0000 (0:00:00.797) 1:07:16.248 **** 2026-02-18 06:58:38.650541 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650545 | orchestrator | 2026-02-18 06:58:38.650550 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-18 06:58:38.650554 | orchestrator | Wednesday 18 February 2026 06:58:28 +0000 (0:00:00.843) 1:07:17.092 **** 2026-02-18 06:58:38.650558 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650562 | orchestrator | 2026-02-18 06:58:38.650567 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-18 06:58:38.650571 | orchestrator | Wednesday 18 February 2026 06:58:28 +0000 (0:00:00.781) 1:07:17.874 **** 2026-02-18 06:58:38.650575 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650580 | orchestrator | 2026-02-18 06:58:38.650584 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-18 06:58:38.650588 | orchestrator | Wednesday 18 February 2026 06:58:29 +0000 (0:00:00.798) 1:07:18.672 **** 2026-02-18 06:58:38.650592 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650597 | orchestrator | 2026-02-18 06:58:38.650601 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-18 06:58:38.650605 | orchestrator | Wednesday 18 February 2026 06:58:30 +0000 (0:00:00.769) 1:07:19.442 **** 2026-02-18 06:58:38.650609 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650613 | orchestrator | 2026-02-18 06:58:38.650618 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-18 06:58:38.650622 | orchestrator | Wednesday 18 February 2026 06:58:31 +0000 (0:00:00.775) 1:07:20.217 **** 2026-02-18 06:58:38.650626 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650630 | orchestrator | 2026-02-18 06:58:38.650635 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-18 06:58:38.650639 | orchestrator | Wednesday 18 February 2026 06:58:32 +0000 (0:00:00.789) 1:07:21.006 **** 2026-02-18 06:58:38.650643 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650647 | orchestrator | 2026-02-18 06:58:38.650652 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-18 06:58:38.650659 | orchestrator | Wednesday 18 February 2026 06:58:32 +0000 (0:00:00.811) 1:07:21.818 **** 2026-02-18 06:58:38.650663 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:58:38.650667 | orchestrator | 2026-02-18 06:58:38.650672 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-18 06:58:38.650676 | orchestrator | Wednesday 18 February 2026 06:58:33 +0000 (0:00:00.798) 1:07:22.617 **** 2026-02-18 06:58:38.650680 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-18 06:58:38.650685 | orchestrator | 2026-02-18 06:58:38.650689 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-18 06:58:38.650693 | orchestrator | Wednesday 18 February 2026 06:58:37 +0000 (0:00:03.993) 1:07:26.611 **** 2026-02-18 06:58:38.650697 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 06:58:38.650702 | orchestrator | 2026-02-18 06:58:38.650709 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-18 06:59:20.213563 | orchestrator | Wednesday 18 February 2026 06:58:38 +0000 (0:00:00.901) 1:07:27.513 **** 2026-02-18 06:59:20.213691 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-18 06:59:20.213721 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-18 06:59:20.213775 | orchestrator | 2026-02-18 06:59:20.213797 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-18 06:59:20.213815 | orchestrator | Wednesday 18 February 2026 06:58:43 +0000 (0:00:04.760) 1:07:32.274 **** 2026-02-18 06:59:20.213834 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:59:20.213853 | orchestrator | 2026-02-18 06:59:20.213873 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-18 06:59:20.213952 | orchestrator | Wednesday 18 February 2026 06:58:44 +0000 (0:00:00.856) 1:07:33.130 **** 2026-02-18 06:59:20.213966 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:59:20.213977 | orchestrator | 2026-02-18 06:59:20.213991 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 06:59:20.214080 | orchestrator | Wednesday 18 February 2026 06:58:45 +0000 (0:00:00.791) 1:07:33.922 **** 2026-02-18 06:59:20.214106 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:59:20.214124 | orchestrator | 2026-02-18 06:59:20.214142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 06:59:20.214160 | orchestrator | Wednesday 18 February 2026 06:58:45 +0000 (0:00:00.833) 1:07:34.756 **** 2026-02-18 06:59:20.214178 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:59:20.214197 | orchestrator | 2026-02-18 06:59:20.214215 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 06:59:20.214235 | orchestrator | Wednesday 18 February 2026 06:58:46 +0000 (0:00:00.802) 1:07:35.559 **** 2026-02-18 06:59:20.214254 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:59:20.214273 | orchestrator | 2026-02-18 06:59:20.214292 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 06:59:20.214312 | orchestrator | Wednesday 18 February 2026 06:58:47 +0000 (0:00:00.810) 1:07:36.370 **** 2026-02-18 06:59:20.214331 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:59:20.214351 | orchestrator | 2026-02-18 06:59:20.214370 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 06:59:20.214389 | orchestrator | Wednesday 18 February 2026 06:58:48 +0000 (0:00:00.933) 1:07:37.303 **** 2026-02-18 06:59:20.214407 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-18 06:59:20.214423 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-18 06:59:20.214440 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-18 06:59:20.214458 | orchestrator | skipping: 
[testbed-node-4] 2026-02-18 06:59:20.214474 | orchestrator | 2026-02-18 06:59:20.214490 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 06:59:20.214507 | orchestrator | Wednesday 18 February 2026 06:58:49 +0000 (0:00:01.117) 1:07:38.421 **** 2026-02-18 06:59:20.214524 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-18 06:59:20.214542 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-18 06:59:20.214562 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-18 06:59:20.214580 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:59:20.214597 | orchestrator | 2026-02-18 06:59:20.214617 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 06:59:20.214637 | orchestrator | Wednesday 18 February 2026 06:58:50 +0000 (0:00:01.151) 1:07:39.573 **** 2026-02-18 06:59:20.214654 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-18 06:59:20.214672 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-18 06:59:20.214689 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-18 06:59:20.214708 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:59:20.214725 | orchestrator | 2026-02-18 06:59:20.214743 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 06:59:20.214784 | orchestrator | Wednesday 18 February 2026 06:58:51 +0000 (0:00:01.116) 1:07:40.690 **** 2026-02-18 06:59:20.214819 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:59:20.214836 | orchestrator | 2026-02-18 06:59:20.214854 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 06:59:20.214872 | orchestrator | Wednesday 18 February 2026 06:58:52 +0000 (0:00:00.814) 1:07:41.505 **** 2026-02-18 06:59:20.214935 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-02-18 06:59:20.214957 | orchestrator | 2026-02-18 06:59:20.214976 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-18 06:59:20.214994 | orchestrator | Wednesday 18 February 2026 06:58:53 +0000 (0:00:01.001) 1:07:42.507 **** 2026-02-18 06:59:20.215013 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:59:20.215031 | orchestrator | 2026-02-18 06:59:20.215048 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-18 06:59:20.215066 | orchestrator | Wednesday 18 February 2026 06:58:55 +0000 (0:00:02.054) 1:07:44.562 **** 2026-02-18 06:59:20.215083 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-02-18 06:59:20.215102 | orchestrator | 2026-02-18 06:59:20.215151 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-18 06:59:20.215170 | orchestrator | Wednesday 18 February 2026 06:58:56 +0000 (0:00:01.111) 1:07:45.673 **** 2026-02-18 06:59:20.215189 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 06:59:20.215202 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-18 06:59:20.215214 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 06:59:20.215227 | orchestrator | 2026-02-18 06:59:20.215239 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-18 06:59:20.215251 | orchestrator | Wednesday 18 February 2026 06:59:00 +0000 (0:00:03.288) 1:07:48.962 **** 2026-02-18 06:59:20.215263 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-18 06:59:20.215276 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-18 06:59:20.215288 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:59:20.215300 | orchestrator | 2026-02-18 06:59:20.215329 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-18 06:59:20.215343 | orchestrator | Wednesday 18 February 2026 06:59:02 +0000 (0:00:02.044) 1:07:51.007 **** 2026-02-18 06:59:20.215366 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:59:20.215379 | orchestrator | 2026-02-18 06:59:20.215392 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-18 06:59:20.215404 | orchestrator | Wednesday 18 February 2026 06:59:02 +0000 (0:00:00.777) 1:07:51.784 **** 2026-02-18 06:59:20.215417 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-02-18 06:59:20.215431 | orchestrator | 2026-02-18 06:59:20.215443 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-18 06:59:20.215456 | orchestrator | Wednesday 18 February 2026 06:59:04 +0000 (0:00:01.107) 1:07:52.892 **** 2026-02-18 06:59:20.215469 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 06:59:20.215483 | orchestrator | 2026-02-18 06:59:20.215496 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-18 06:59:20.215508 | orchestrator | Wednesday 18 February 2026 06:59:05 +0000 (0:00:01.628) 1:07:54.521 **** 2026-02-18 06:59:20.215520 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 06:59:20.215534 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-18 06:59:20.215547 | orchestrator | 2026-02-18 06:59:20.215559 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-18 06:59:20.215570 | orchestrator | Wednesday 18 February 2026 06:59:10 +0000 (0:00:05.050) 1:07:59.571 **** 
2026-02-18 06:59:20.215594 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 06:59:20.215605 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 06:59:20.215616 | orchestrator | 2026-02-18 06:59:20.215630 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-18 06:59:20.215649 | orchestrator | Wednesday 18 February 2026 06:59:13 +0000 (0:00:03.154) 1:08:02.726 **** 2026-02-18 06:59:20.215668 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-18 06:59:20.215685 | orchestrator | ok: [testbed-node-4] 2026-02-18 06:59:20.215704 | orchestrator | 2026-02-18 06:59:20.215722 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-18 06:59:20.215742 | orchestrator | Wednesday 18 February 2026 06:59:15 +0000 (0:00:01.654) 1:08:04.380 **** 2026-02-18 06:59:20.215759 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-02-18 06:59:20.215778 | orchestrator | 2026-02-18 06:59:20.215798 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-18 06:59:20.215817 | orchestrator | Wednesday 18 February 2026 06:59:16 +0000 (0:00:01.346) 1:08:05.726 **** 2026-02-18 06:59:20.215835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 06:59:20.215846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 06:59:20.215858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 06:59:20.215869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-18 06:59:20.215914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 06:59:20.215928 | orchestrator | skipping: [testbed-node-4] 2026-02-18 06:59:20.215939 | orchestrator | 2026-02-18 06:59:20.215949 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-18 06:59:20.215960 | orchestrator | Wednesday 18 February 2026 06:59:18 +0000 (0:00:01.633) 1:08:07.360 **** 2026-02-18 06:59:20.215971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 06:59:20.215982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 06:59:20.215993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 06:59:20.216015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 07:00:26.295590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 07:00:26.295735 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:00:26.295755 | orchestrator | 2026-02-18 07:00:26.295770 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-18 07:00:26.295789 | orchestrator | Wednesday 18 February 2026 06:59:20 +0000 (0:00:01.716) 1:08:09.076 **** 2026-02-18 07:00:26.295807 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 07:00:26.295821 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 07:00:26.295832 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 07:00:26.295917 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 07:00:26.295931 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 07:00:26.295942 | orchestrator | 2026-02-18 07:00:26.295954 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-18 07:00:26.295965 | orchestrator | Wednesday 18 February 2026 06:59:50 +0000 (0:00:30.785) 1:08:39.862 **** 2026-02-18 07:00:26.295976 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:00:26.295987 | orchestrator | 2026-02-18 07:00:26.295998 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-18 07:00:26.296009 | orchestrator | Wednesday 18 February 2026 06:59:51 +0000 (0:00:00.781) 1:08:40.643 **** 2026-02-18 07:00:26.296020 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:00:26.296031 | orchestrator | 2026-02-18 07:00:26.296042 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-18 07:00:26.296053 | orchestrator | Wednesday 18 February 2026 06:59:52 +0000 (0:00:00.796) 1:08:41.440 **** 2026-02-18 07:00:26.296064 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-02-18 07:00:26.296075 | orchestrator | 2026-02-18 07:00:26.296086 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-18 07:00:26.296097 | orchestrator | Wednesday 18 February 2026 06:59:53 +0000 (0:00:01.110) 1:08:42.550 **** 2026-02-18 07:00:26.296108 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-02-18 07:00:26.296121 | orchestrator | 2026-02-18 07:00:26.296134 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-18 07:00:26.296148 | orchestrator | Wednesday 18 February 2026 06:59:54 +0000 (0:00:01.112) 1:08:43.663 **** 2026-02-18 07:00:26.296160 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:00:26.296173 | orchestrator | 2026-02-18 07:00:26.296186 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-18 07:00:26.296200 | orchestrator | Wednesday 18 February 2026 06:59:56 +0000 (0:00:02.093) 1:08:45.757 **** 2026-02-18 07:00:26.296213 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:00:26.296298 | orchestrator | 2026-02-18 07:00:26.296313 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-18 07:00:26.296324 | orchestrator | Wednesday 18 February 2026 06:59:58 +0000 (0:00:02.039) 1:08:47.796 **** 2026-02-18 07:00:26.296335 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:00:26.296346 | orchestrator | 2026-02-18 07:00:26.296357 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-18 07:00:26.296367 | orchestrator | Wednesday 18 February 2026 07:00:01 +0000 (0:00:02.256) 1:08:50.053 **** 2026-02-18 07:00:26.296378 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-18 07:00:26.296389 | orchestrator | 2026-02-18 07:00:26.296400 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-18 07:00:26.296411 | 
orchestrator | 2026-02-18 07:00:26.296422 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 07:00:26.296433 | orchestrator | Wednesday 18 February 2026 07:00:04 +0000 (0:00:03.297) 1:08:53.351 **** 2026-02-18 07:00:26.296443 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-18 07:00:26.296454 | orchestrator | 2026-02-18 07:00:26.296479 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-18 07:00:26.296491 | orchestrator | Wednesday 18 February 2026 07:00:05 +0000 (0:00:01.224) 1:08:54.575 **** 2026-02-18 07:00:26.296502 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:26.296512 | orchestrator | 2026-02-18 07:00:26.296523 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-18 07:00:26.296543 | orchestrator | Wednesday 18 February 2026 07:00:07 +0000 (0:00:01.488) 1:08:56.064 **** 2026-02-18 07:00:26.296555 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:26.296566 | orchestrator | 2026-02-18 07:00:26.296577 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 07:00:26.296587 | orchestrator | Wednesday 18 February 2026 07:00:08 +0000 (0:00:01.221) 1:08:57.286 **** 2026-02-18 07:00:26.296598 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:26.296609 | orchestrator | 2026-02-18 07:00:26.296620 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 07:00:26.296631 | orchestrator | Wednesday 18 February 2026 07:00:09 +0000 (0:00:01.502) 1:08:58.788 **** 2026-02-18 07:00:26.296641 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:26.296652 | orchestrator | 2026-02-18 07:00:26.296682 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-18 07:00:26.296694 | orchestrator | Wednesday 
18 February 2026 07:00:11 +0000 (0:00:01.130) 1:08:59.919 **** 2026-02-18 07:00:26.296705 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:26.296716 | orchestrator | 2026-02-18 07:00:26.296726 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-18 07:00:26.296737 | orchestrator | Wednesday 18 February 2026 07:00:12 +0000 (0:00:01.157) 1:09:01.077 **** 2026-02-18 07:00:26.296748 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:26.296759 | orchestrator | 2026-02-18 07:00:26.296770 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-18 07:00:26.296781 | orchestrator | Wednesday 18 February 2026 07:00:13 +0000 (0:00:01.147) 1:09:02.225 **** 2026-02-18 07:00:26.296792 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:00:26.296803 | orchestrator | 2026-02-18 07:00:26.296814 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-18 07:00:26.296824 | orchestrator | Wednesday 18 February 2026 07:00:14 +0000 (0:00:01.159) 1:09:03.384 **** 2026-02-18 07:00:26.296835 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:26.296846 | orchestrator | 2026-02-18 07:00:26.296873 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-18 07:00:26.296885 | orchestrator | Wednesday 18 February 2026 07:00:15 +0000 (0:00:01.238) 1:09:04.623 **** 2026-02-18 07:00:26.296895 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 07:00:26.296906 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 07:00:26.296917 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 07:00:26.296928 | orchestrator | 2026-02-18 07:00:26.296939 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-18 07:00:26.296950 | orchestrator | Wednesday 18 February 2026 07:00:17 +0000 (0:00:01.783) 1:09:06.406 **** 2026-02-18 07:00:26.296961 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:26.296972 | orchestrator | 2026-02-18 07:00:26.296983 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-18 07:00:26.296994 | orchestrator | Wednesday 18 February 2026 07:00:18 +0000 (0:00:01.278) 1:09:07.685 **** 2026-02-18 07:00:26.297005 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 07:00:26.297016 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 07:00:26.297027 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 07:00:26.297037 | orchestrator | 2026-02-18 07:00:26.297048 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-18 07:00:26.297059 | orchestrator | Wednesday 18 February 2026 07:00:21 +0000 (0:00:03.022) 1:09:10.708 **** 2026-02-18 07:00:26.297070 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-18 07:00:26.297081 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-18 07:00:26.297092 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-18 07:00:26.297111 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:00:26.297121 | orchestrator | 2026-02-18 07:00:26.297132 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-18 07:00:26.297143 | orchestrator | Wednesday 18 February 2026 07:00:23 +0000 (0:00:01.525) 1:09:12.233 **** 2026-02-18 07:00:26.297155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-18 07:00:26.297170 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-18 07:00:26.297181 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-18 07:00:26.297192 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:00:26.297203 | orchestrator | 2026-02-18 07:00:26.297214 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-18 07:00:26.297230 | orchestrator | Wednesday 18 February 2026 07:00:25 +0000 (0:00:01.716) 1:09:13.950 **** 2026-02-18 07:00:26.297244 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:26.297265 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:45.675745 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:45.675895 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:00:45.675914 | orchestrator | 2026-02-18 07:00:45.675927 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-18 07:00:45.675940 | orchestrator | Wednesday 18 February 2026 07:00:26 +0000 (0:00:01.206) 1:09:15.157 **** 2026-02-18 07:00:45.675953 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '07dd2330a089', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-18 07:00:19.393392', 'end': '2026-02-18 07:00:19.439624', 'delta': '0:00:00.046232', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07dd2330a089'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-18 07:00:45.675968 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '1f56f83084c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-18 07:00:19.999322', 'end': '2026-02-18 07:00:20.051992', 'delta': '0:00:00.052670', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f56f83084c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-18 07:00:45.676004 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '3d2b8f6fff5a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-18 07:00:20.582417', 'end': '2026-02-18 07:00:20.628330', 'delta': '0:00:00.045913', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d2b8f6fff5a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-18 07:00:45.676016 | orchestrator | 2026-02-18 07:00:45.676027 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-18 07:00:45.676038 | orchestrator | Wednesday 18 February 2026 07:00:27 +0000 (0:00:01.206) 1:09:16.363 **** 2026-02-18 07:00:45.676049 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:45.676060 | orchestrator | 2026-02-18 07:00:45.676071 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-18 07:00:45.676097 | orchestrator | Wednesday 18 February 2026 07:00:28 +0000 (0:00:01.259) 1:09:17.623 **** 2026-02-18 07:00:45.676108 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:00:45.676119 | orchestrator | 2026-02-18 07:00:45.676130 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-18 07:00:45.676140 | orchestrator | Wednesday 18 February 2026 07:00:30 +0000 (0:00:01.323) 1:09:18.947 **** 2026-02-18 07:00:45.676151 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:45.676162 | orchestrator | 2026-02-18 07:00:45.676172 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-18 07:00:45.676183 | orchestrator | Wednesday 18 February 2026 07:00:31 +0000 (0:00:01.185) 1:09:20.132 **** 2026-02-18 07:00:45.676194 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-18 07:00:45.676205 | orchestrator | 2026-02-18 07:00:45.676216 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 07:00:45.676226 | orchestrator | Wednesday 18 February 2026 07:00:33 +0000 (0:00:02.037) 1:09:22.170 **** 2026-02-18 07:00:45.676237 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:45.676248 | orchestrator | 2026-02-18 07:00:45.676258 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-18 07:00:45.676271 | orchestrator | Wednesday 18 February 2026 07:00:34 +0000 (0:00:01.134) 1:09:23.305 **** 2026-02-18 07:00:45.676302 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:00:45.676316 | orchestrator | 2026-02-18 07:00:45.676328 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-18 07:00:45.676341 | orchestrator | Wednesday 18 February 2026 07:00:35 +0000 (0:00:01.161) 1:09:24.466 **** 2026-02-18 07:00:45.676353 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:00:45.676365 | orchestrator | 2026-02-18 07:00:45.676377 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-18 07:00:45.676389 | orchestrator | Wednesday 18 February 2026 07:00:37 +0000 (0:00:01.694) 1:09:26.161 **** 2026-02-18 07:00:45.676401 | orchestrator | 
skipping: [testbed-node-5] 2026-02-18 07:00:45.676414 | orchestrator | 2026-02-18 07:00:45.676426 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-18 07:00:45.676448 | orchestrator | Wednesday 18 February 2026 07:00:38 +0000 (0:00:01.199) 1:09:27.360 **** 2026-02-18 07:00:45.676460 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:00:45.676473 | orchestrator | 2026-02-18 07:00:45.676485 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-18 07:00:45.676498 | orchestrator | Wednesday 18 February 2026 07:00:39 +0000 (0:00:01.128) 1:09:28.488 **** 2026-02-18 07:00:45.676510 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:45.676522 | orchestrator | 2026-02-18 07:00:45.676534 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-18 07:00:45.676546 | orchestrator | Wednesday 18 February 2026 07:00:40 +0000 (0:00:01.225) 1:09:29.714 **** 2026-02-18 07:00:45.676558 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:00:45.676569 | orchestrator | 2026-02-18 07:00:45.676581 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-18 07:00:45.676593 | orchestrator | Wednesday 18 February 2026 07:00:41 +0000 (0:00:01.095) 1:09:30.809 **** 2026-02-18 07:00:45.676605 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:45.676618 | orchestrator | 2026-02-18 07:00:45.676631 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-18 07:00:45.676643 | orchestrator | Wednesday 18 February 2026 07:00:43 +0000 (0:00:01.161) 1:09:31.971 **** 2026-02-18 07:00:45.676654 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:00:45.676665 | orchestrator | 2026-02-18 07:00:45.676676 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-18 07:00:45.676687 
| orchestrator | Wednesday 18 February 2026 07:00:44 +0000 (0:00:01.133) 1:09:33.104 **** 2026-02-18 07:00:45.676698 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:00:45.676708 | orchestrator | 2026-02-18 07:00:45.676719 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-18 07:00:45.676729 | orchestrator | Wednesday 18 February 2026 07:00:45 +0000 (0:00:01.180) 1:09:34.285 **** 2026-02-18 07:00:45.676741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 07:00:45.676753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'uuids': ['95905d4e-bf83-4096-8e9b-20c58ade16b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB']}})  2026-02-18 07:00:45.676772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5427a30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-18 07:00:45.676794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3']}})  2026-02-18 07:00:46.780136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 07:00:46.780241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 07:00:46.780257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-18 07:00:46.780272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 07:00:46.780285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur', 'dm-uuid-CRYPT-LUKS2-8cf9dc351f244d02b853cca8cfa45a9c-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 07:00:46.780296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 07:00:46.780327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'uuids': ['8cf9dc35-1f24-4d02-b853-cca8cfa45a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur']}})  2026-02-18 07:00:46.780379 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72']}})  2026-02-18 07:00:46.780392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 07:00:46.780407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5e163393', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-18 07:00:46.780426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 07:00:46.780446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-18 07:00:46.780465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB', 'dm-uuid-CRYPT-LUKS2-95905d4ebf8340968e9b20c58ade16b8-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-18 07:00:47.010258 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:00:47.010361 | orchestrator | 2026-02-18 07:00:47.010377 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-18 07:00:47.010391 | orchestrator | Wednesday 18 February 2026 07:00:46 +0000 (0:00:01.364) 1:09:35.649 **** 2026-02-18 07:00:47.010405 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:47.010421 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72', 'dm-uuid-LVM-UGdTSq9PdCT28O1nGgJMQZmacDTaVquQlOxcAHsNv5oyHWylEX6fq2HGAiOWhUXB'], 'uuids': ['95905d4e-bf83-4096-8e9b-20c58ade16b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB']}}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:47.010434 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d', 'scsi-SQEMU_QEMU_HARDDISK_b5427a30-1286-46b2-89f3-e63c343feb5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5427a30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:47.010464 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-V5vSKL-De4c-kQ6l-RjTy-GBmF-DjHN-jh3DfZ', 'scsi-0QEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322', 'scsi-SQEMU_QEMU_HARDDISK_b30dbb74-62b1-4c30-bd5a-d0d123586322'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3']}}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:47.010521 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:47.010535 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:47.010547 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-18-02-27-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:47.010559 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:47.010571 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur', 'dm-uuid-CRYPT-LUKS2-8cf9dc351f244d02b853cca8cfa45a9c-2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:47.010596 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:00:47.010615 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b4fe298a--487d--5630--bf9a--8376c13eb8c3-osd--block--b4fe298a--487d--5630--bf9a--8376c13eb8c3', 'dm-uuid-LVM-CcC7lw8YgnkHJGdsGYlrE376lA1N94Ls2KtCWRb5kdVA9bliPjEKVnRGGhe4QGur'], 'uuids': ['8cf9dc35-1f24-4d02-b853-cca8cfa45a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30dbb74', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2KtCWR-b5kd-VA9b-liPj-EKVn-RGGh-e4QGur']}}, 'ansible_loop_var': 'item'})  2026-02-18 07:01:00.366300 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-vMCTnV-HoWA-SOlh-C0RX-hAjh-gDU0-9dYFco', 'scsi-0QEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d', 'scsi-SQEMU_QEMU_HARDDISK_136ad752-18af-4e59-8421-509e0a1d154d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '136ad752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72-osd--block--a3fa5e2b--5aa1--58af--bddd--1734a40d2e72']}}, 'ansible_loop_var': 'item'})  2026-02-18 07:01:00.366381 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:01:00.366399 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5e163393', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e163393-99f4-4b13-b667-4f0af745a039-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:01:00.366430 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:01:00.366436 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:01:00.366441 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB', 'dm-uuid-CRYPT-LUKS2-95905d4ebf8340968e9b20c58ade16b8-lOxcAH-sNv5-oyHW-ylEX-6fq2-HGAi-OWhUXB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-18 07:01:00.366446 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:01:00.366451 | orchestrator | 2026-02-18 07:01:00.366456 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-18 07:01:00.366462 | orchestrator | Wednesday 18 February 2026 07:00:48 +0000 (0:00:01.443) 1:09:37.093 **** 2026-02-18 07:01:00.366470 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:01:00.366475 | orchestrator | 2026-02-18 07:01:00.366479 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-18 07:01:00.366483 | orchestrator | Wednesday 18 February 2026 07:00:49 +0000 (0:00:01.547) 1:09:38.640 **** 2026-02-18 07:01:00.366486 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:01:00.366490 | orchestrator | 2026-02-18 07:01:00.366494 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 07:01:00.366498 | orchestrator | Wednesday 18 February 2026 07:00:50 +0000 (0:00:01.136) 1:09:39.776 **** 2026-02-18 07:01:00.366502 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:01:00.366505 | orchestrator | 2026-02-18 07:01:00.366509 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 07:01:00.366513 | orchestrator | Wednesday 18 February 2026 07:00:52 +0000 (0:00:01.559) 1:09:41.336 **** 2026-02-18 07:01:00.366520 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:01:00.366524 | orchestrator | 2026-02-18 07:01:00.366527 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-18 07:01:00.366531 | orchestrator | Wednesday 18 February 2026 07:00:53 +0000 (0:00:01.261) 1:09:42.597 **** 2026-02-18 07:01:00.366535 | orchestrator | skipping: [testbed-node-5] 2026-02-18 
07:01:00.366539 | orchestrator | 2026-02-18 07:01:00.366543 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-18 07:01:00.366546 | orchestrator | Wednesday 18 February 2026 07:00:55 +0000 (0:00:01.286) 1:09:43.884 **** 2026-02-18 07:01:00.366550 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:01:00.366554 | orchestrator | 2026-02-18 07:01:00.366558 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-18 07:01:00.366562 | orchestrator | Wednesday 18 February 2026 07:00:56 +0000 (0:00:01.203) 1:09:45.088 **** 2026-02-18 07:01:00.366566 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-18 07:01:00.366570 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-18 07:01:00.366573 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-18 07:01:00.366577 | orchestrator | 2026-02-18 07:01:00.366581 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-18 07:01:00.366585 | orchestrator | Wednesday 18 February 2026 07:00:57 +0000 (0:00:01.761) 1:09:46.850 **** 2026-02-18 07:01:00.366589 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-18 07:01:00.366593 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-18 07:01:00.366597 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-18 07:01:00.366601 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:01:00.366605 | orchestrator | 2026-02-18 07:01:00.366609 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-18 07:01:00.366612 | orchestrator | Wednesday 18 February 2026 07:00:59 +0000 (0:00:01.197) 1:09:48.047 **** 2026-02-18 07:01:00.366616 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-18 07:01:00.366621 | 
orchestrator | 2026-02-18 07:01:00.366628 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-18 07:01:43.502667 | orchestrator | Wednesday 18 February 2026 07:01:00 +0000 (0:00:01.179) 1:09:49.227 **** 2026-02-18 07:01:43.502784 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:01:43.502800 | orchestrator | 2026-02-18 07:01:43.502813 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-18 07:01:43.502882 | orchestrator | Wednesday 18 February 2026 07:01:01 +0000 (0:00:01.137) 1:09:50.365 **** 2026-02-18 07:01:43.502895 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:01:43.502906 | orchestrator | 2026-02-18 07:01:43.502918 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-18 07:01:43.502929 | orchestrator | Wednesday 18 February 2026 07:01:02 +0000 (0:00:01.160) 1:09:51.526 **** 2026-02-18 07:01:43.502940 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:01:43.502977 | orchestrator | 2026-02-18 07:01:43.502988 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-18 07:01:43.502999 | orchestrator | Wednesday 18 February 2026 07:01:03 +0000 (0:00:01.144) 1:09:52.670 **** 2026-02-18 07:01:43.503010 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:01:43.503022 | orchestrator | 2026-02-18 07:01:43.503033 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-18 07:01:43.503043 | orchestrator | Wednesday 18 February 2026 07:01:05 +0000 (0:00:01.251) 1:09:53.921 **** 2026-02-18 07:01:43.503054 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-18 07:01:43.503065 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 07:01:43.503076 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-18 07:01:43.503086 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:01:43.503097 | orchestrator | 2026-02-18 07:01:43.503108 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-18 07:01:43.503119 | orchestrator | Wednesday 18 February 2026 07:01:06 +0000 (0:00:01.815) 1:09:55.737 **** 2026-02-18 07:01:43.503129 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-18 07:01:43.503140 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 07:01:43.503151 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-18 07:01:43.503161 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:01:43.503172 | orchestrator | 2026-02-18 07:01:43.503183 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-18 07:01:43.503194 | orchestrator | Wednesday 18 February 2026 07:01:08 +0000 (0:00:01.766) 1:09:57.503 **** 2026-02-18 07:01:43.503206 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-18 07:01:43.503219 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-18 07:01:43.503232 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-18 07:01:43.503244 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:01:43.503256 | orchestrator | 2026-02-18 07:01:43.503268 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-18 07:01:43.503280 | orchestrator | Wednesday 18 February 2026 07:01:10 +0000 (0:00:01.886) 1:09:59.390 **** 2026-02-18 07:01:43.503293 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:01:43.503306 | orchestrator | 2026-02-18 07:01:43.503318 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-18 07:01:43.503331 | orchestrator | Wednesday 18 February 2026 07:01:11 +0000 
(0:00:01.149) 1:10:00.539 **** 2026-02-18 07:01:43.503344 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-18 07:01:43.503356 | orchestrator | 2026-02-18 07:01:43.503369 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-18 07:01:43.503382 | orchestrator | Wednesday 18 February 2026 07:01:13 +0000 (0:00:01.372) 1:10:01.912 **** 2026-02-18 07:01:43.503409 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 07:01:43.503423 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 07:01:43.503436 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 07:01:43.503449 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-18 07:01:43.503461 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-18 07:01:43.503473 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-18 07:01:43.503485 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-18 07:01:43.503498 | orchestrator | 2026-02-18 07:01:43.503510 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-18 07:01:43.503522 | orchestrator | Wednesday 18 February 2026 07:01:14 +0000 (0:00:01.844) 1:10:03.756 **** 2026-02-18 07:01:43.503534 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-18 07:01:43.503554 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-18 07:01:43.503567 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-18 07:01:43.503579 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3)
2026-02-18 07:01:43.503591 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-18 07:01:43.503602 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-18 07:01:43.503612 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-18 07:01:43.503623 | orchestrator |
2026-02-18 07:01:43.503633 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-18 07:01:43.503644 | orchestrator | Wednesday 18 February 2026 07:01:17 +0000 (0:00:02.395) 1:10:06.152 ****
2026-02-18 07:01:43.503655 | orchestrator | changed: [testbed-node-5]
2026-02-18 07:01:43.503666 | orchestrator |
2026-02-18 07:01:43.503694 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-18 07:01:43.503705 | orchestrator | Wednesday 18 February 2026 07:01:19 +0000 (0:00:01.880) 1:10:08.032 ****
2026-02-18 07:01:43.503716 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-18 07:01:43.503728 | orchestrator |
2026-02-18 07:01:43.503739 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-18 07:01:43.503749 | orchestrator | Wednesday 18 February 2026 07:01:21 +0000 (0:00:02.581) 1:10:10.614 ****
2026-02-18 07:01:43.503760 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-18 07:01:43.503772 | orchestrator |
2026-02-18 07:01:43.503783 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-18 07:01:43.503795 | orchestrator | Wednesday 18 February 2026 07:01:23 +0000 (0:00:01.936) 1:10:12.550 ****
2026-02-18 07:01:43.503814 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-18 07:01:43.503859 | orchestrator |
2026-02-18 07:01:43.503878 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-18 07:01:43.503897 | orchestrator | Wednesday 18 February 2026 07:01:24 +0000 (0:00:01.139) 1:10:13.690 ****
2026-02-18 07:01:43.503914 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-18 07:01:43.503934 | orchestrator |
2026-02-18 07:01:43.503946 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-18 07:01:43.503956 | orchestrator | Wednesday 18 February 2026 07:01:25 +0000 (0:00:01.216) 1:10:14.836 ****
2026-02-18 07:01:43.503967 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:01:43.503978 | orchestrator |
2026-02-18 07:01:43.503988 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-18 07:01:43.503999 | orchestrator | Wednesday 18 February 2026 07:01:27 +0000 (0:00:01.216) 1:10:16.052 ****
2026-02-18 07:01:43.504010 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:01:43.504021 | orchestrator |
2026-02-18 07:01:43.504031 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-18 07:01:43.504042 | orchestrator | Wednesday 18 February 2026 07:01:28 +0000 (0:00:01.535) 1:10:17.588 ****
2026-02-18 07:01:43.504053 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:01:43.504063 | orchestrator |
2026-02-18 07:01:43.504074 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-18 07:01:43.504085 | orchestrator | Wednesday 18 February 2026 07:01:30 +0000 (0:00:01.673) 1:10:19.262 ****
2026-02-18 07:01:43.504096 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:01:43.504106 | orchestrator |
2026-02-18 07:01:43.504117 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-18 07:01:43.504128 | orchestrator | Wednesday 18 February 2026 07:01:31 +0000 (0:00:01.552) 1:10:20.815 ****
2026-02-18 07:01:43.504147 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:01:43.504158 | orchestrator |
2026-02-18 07:01:43.504169 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-18 07:01:43.504180 | orchestrator | Wednesday 18 February 2026 07:01:33 +0000 (0:00:01.137) 1:10:21.952 ****
2026-02-18 07:01:43.504190 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:01:43.504201 | orchestrator |
2026-02-18 07:01:43.504212 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-18 07:01:43.504223 | orchestrator | Wednesday 18 February 2026 07:01:34 +0000 (0:00:01.168) 1:10:23.121 ****
2026-02-18 07:01:43.504233 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:01:43.504244 | orchestrator |
2026-02-18 07:01:43.504255 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-18 07:01:43.504272 | orchestrator | Wednesday 18 February 2026 07:01:35 +0000 (0:00:01.147) 1:10:24.268 ****
2026-02-18 07:01:43.504283 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:01:43.504294 | orchestrator |
2026-02-18 07:01:43.504305 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-18 07:01:43.504315 | orchestrator | Wednesday 18 February 2026 07:01:36 +0000 (0:00:01.568) 1:10:25.836 ****
2026-02-18 07:01:43.504326 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:01:43.504337 | orchestrator |
2026-02-18 07:01:43.504347 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-18 07:01:43.504358 | orchestrator | Wednesday 18 February 2026 07:01:38 +0000 (0:00:01.544) 1:10:27.381 ****
2026-02-18 07:01:43.504369 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:01:43.504380 | orchestrator |
2026-02-18 07:01:43.504390 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-18 07:01:43.504401 | orchestrator | Wednesday 18 February 2026 07:01:39 +0000 (0:00:00.841) 1:10:28.222 ****
2026-02-18 07:01:43.504412 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:01:43.504423 | orchestrator |
2026-02-18 07:01:43.504434 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-18 07:01:43.504444 | orchestrator | Wednesday 18 February 2026 07:01:40 +0000 (0:00:00.831) 1:10:29.054 ****
2026-02-18 07:01:43.504455 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:01:43.504466 | orchestrator |
2026-02-18 07:01:43.504476 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-18 07:01:43.504487 | orchestrator | Wednesday 18 February 2026 07:01:40 +0000 (0:00:00.818) 1:10:29.873 ****
2026-02-18 07:01:43.504498 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:01:43.504508 | orchestrator |
2026-02-18 07:01:43.504519 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-18 07:01:43.504530 | orchestrator | Wednesday 18 February 2026 07:01:41 +0000 (0:00:00.818) 1:10:30.691 ****
2026-02-18 07:01:43.504540 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:01:43.504551 | orchestrator |
2026-02-18 07:01:43.504562 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-18 07:01:43.504572 | orchestrator | Wednesday 18 February 2026 07:01:42 +0000 (0:00:00.894) 1:10:31.585 ****
2026-02-18 07:01:43.504583 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:01:43.504594 | orchestrator |
2026-02-18 07:01:43.504612 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-18 07:02:24.346771 | orchestrator | Wednesday 18 February 2026 07:01:43 +0000 (0:00:00.779) 1:10:32.365 ****
2026-02-18 07:02:24.346920 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.346939 | orchestrator |
2026-02-18 07:02:24.346952 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-18 07:02:24.346964 | orchestrator | Wednesday 18 February 2026 07:01:44 +0000 (0:00:00.753) 1:10:33.119 ****
2026-02-18 07:02:24.346976 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.346987 | orchestrator |
2026-02-18 07:02:24.346998 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-18 07:02:24.347010 | orchestrator | Wednesday 18 February 2026 07:01:45 +0000 (0:00:00.856) 1:10:33.975 ****
2026-02-18 07:02:24.347048 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:02:24.347061 | orchestrator |
2026-02-18 07:02:24.347072 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-18 07:02:24.347083 | orchestrator | Wednesday 18 February 2026 07:01:45 +0000 (0:00:00.813) 1:10:34.789 ****
2026-02-18 07:02:24.347094 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:02:24.347106 | orchestrator |
2026-02-18 07:02:24.347118 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-18 07:02:24.347129 | orchestrator | Wednesday 18 February 2026 07:01:46 +0000 (0:00:00.834) 1:10:35.623 ****
2026-02-18 07:02:24.347140 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347151 | orchestrator |
2026-02-18 07:02:24.347162 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-18 07:02:24.347173 | orchestrator | Wednesday 18 February 2026 07:01:47 +0000 (0:00:00.783) 1:10:36.406 ****
2026-02-18 07:02:24.347184 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347195 | orchestrator |
2026-02-18 07:02:24.347206 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-18 07:02:24.347217 | orchestrator | Wednesday 18 February 2026 07:01:48 +0000 (0:00:00.771) 1:10:37.177 ****
2026-02-18 07:02:24.347228 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347239 | orchestrator |
2026-02-18 07:02:24.347250 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-18 07:02:24.347261 | orchestrator | Wednesday 18 February 2026 07:01:49 +0000 (0:00:00.786) 1:10:37.964 ****
2026-02-18 07:02:24.347272 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347283 | orchestrator |
2026-02-18 07:02:24.347294 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-18 07:02:24.347307 | orchestrator | Wednesday 18 February 2026 07:01:49 +0000 (0:00:00.762) 1:10:38.726 ****
2026-02-18 07:02:24.347321 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347334 | orchestrator |
2026-02-18 07:02:24.347347 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-18 07:02:24.347359 | orchestrator | Wednesday 18 February 2026 07:01:50 +0000 (0:00:00.860) 1:10:39.587 ****
2026-02-18 07:02:24.347372 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347385 | orchestrator |
2026-02-18 07:02:24.347399 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-18 07:02:24.347412 | orchestrator | Wednesday 18 February 2026 07:01:51 +0000 (0:00:00.787) 1:10:40.375 ****
2026-02-18 07:02:24.347424 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347438 | orchestrator |
2026-02-18 07:02:24.347451 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-18 07:02:24.347464 | orchestrator | Wednesday 18 February 2026 07:01:52 +0000 (0:00:00.774) 1:10:41.149 ****
2026-02-18 07:02:24.347476 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347489 | orchestrator |
2026-02-18 07:02:24.347502 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-18 07:02:24.347515 | orchestrator | Wednesday 18 February 2026 07:01:53 +0000 (0:00:00.827) 1:10:41.976 ****
2026-02-18 07:02:24.347544 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347555 | orchestrator |
2026-02-18 07:02:24.347566 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-18 07:02:24.347577 | orchestrator | Wednesday 18 February 2026 07:01:53 +0000 (0:00:00.854) 1:10:42.831 ****
2026-02-18 07:02:24.347589 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347600 | orchestrator |
2026-02-18 07:02:24.347611 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-18 07:02:24.347621 | orchestrator | Wednesday 18 February 2026 07:01:54 +0000 (0:00:00.778) 1:10:43.609 ****
2026-02-18 07:02:24.347632 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347643 | orchestrator |
2026-02-18 07:02:24.347654 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-18 07:02:24.347674 | orchestrator | Wednesday 18 February 2026 07:01:55 +0000 (0:00:00.808) 1:10:44.417 ****
2026-02-18 07:02:24.347685 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347696 | orchestrator |
2026-02-18 07:02:24.347707 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-18 07:02:24.347718 | orchestrator | Wednesday 18 February 2026 07:01:56 +0000 (0:00:00.797) 1:10:45.215 ****
2026-02-18 07:02:24.347733 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:02:24.347744 | orchestrator |
2026-02-18 07:02:24.347755 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-18 07:02:24.347766 | orchestrator | Wednesday 18 February 2026 07:01:57 +0000 (0:00:01.573) 1:10:46.789 ****
2026-02-18 07:02:24.347777 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:02:24.347788 | orchestrator |
2026-02-18 07:02:24.347799 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-18 07:02:24.347827 | orchestrator | Wednesday 18 February 2026 07:01:59 +0000 (0:00:01.850) 1:10:48.639 ****
2026-02-18 07:02:24.347838 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-18 07:02:24.347850 | orchestrator |
2026-02-18 07:02:24.347862 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-18 07:02:24.347873 | orchestrator | Wednesday 18 February 2026 07:02:01 +0000 (0:00:01.239) 1:10:49.879 ****
2026-02-18 07:02:24.347884 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347895 | orchestrator |
2026-02-18 07:02:24.347906 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-18 07:02:24.347934 | orchestrator | Wednesday 18 February 2026 07:02:02 +0000 (0:00:01.160) 1:10:51.039 ****
2026-02-18 07:02:24.347945 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.347956 | orchestrator |
2026-02-18 07:02:24.347967 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-18 07:02:24.347978 | orchestrator | Wednesday 18 February 2026 07:02:03 +0000 (0:00:01.215) 1:10:52.255 ****
2026-02-18 07:02:24.347989 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-18 07:02:24.348000 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-18 07:02:24.348011 | orchestrator |
2026-02-18 07:02:24.348022 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-18 07:02:24.348033 | orchestrator | Wednesday 18 February 2026 07:02:05 +0000 (0:00:01.802) 1:10:54.057 ****
2026-02-18 07:02:24.348044 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:02:24.348055 | orchestrator |
2026-02-18 07:02:24.348065 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-18 07:02:24.348076 | orchestrator | Wednesday 18 February 2026 07:02:06 +0000 (0:00:01.483) 1:10:55.541 ****
2026-02-18 07:02:24.348087 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.348098 | orchestrator |
2026-02-18 07:02:24.348109 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-18 07:02:24.348120 | orchestrator | Wednesday 18 February 2026 07:02:07 +0000 (0:00:01.153) 1:10:56.695 ****
2026-02-18 07:02:24.348131 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.348142 | orchestrator |
2026-02-18 07:02:24.348153 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-18 07:02:24.348164 | orchestrator | Wednesday 18 February 2026 07:02:08 +0000 (0:00:00.819) 1:10:57.515 ****
2026-02-18 07:02:24.348175 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.348186 | orchestrator |
2026-02-18 07:02:24.348197 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-18 07:02:24.348220 | orchestrator | Wednesday 18 February 2026 07:02:09 +0000 (0:00:00.827) 1:10:58.343 ****
2026-02-18 07:02:24.348242 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-18 07:02:24.348255 | orchestrator |
2026-02-18 07:02:24.348274 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-18 07:02:24.348292 | orchestrator | Wednesday 18 February 2026 07:02:10 +0000 (0:00:01.170) 1:10:59.513 ****
2026-02-18 07:02:24.348332 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:02:24.348353 | orchestrator |
2026-02-18 07:02:24.348371 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-18 07:02:24.348389 | orchestrator | Wednesday 18 February 2026 07:02:12 +0000 (0:00:01.734) 1:11:01.248 ****
2026-02-18 07:02:24.348407 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-18 07:02:24.348425 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-18 07:02:24.348443 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-18 07:02:24.348462 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.348477 | orchestrator |
2026-02-18 07:02:24.348488 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-18 07:02:24.348499 | orchestrator | Wednesday 18 February 2026 07:02:13 +0000 (0:00:01.127) 1:11:02.376 ****
2026-02-18 07:02:24.348510 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.348521 | orchestrator |
2026-02-18 07:02:24.348531 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-18 07:02:24.348542 | orchestrator | Wednesday 18 February 2026 07:02:14 +0000 (0:00:01.126) 1:11:03.502 ****
2026-02-18 07:02:24.348561 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.348572 | orchestrator |
2026-02-18 07:02:24.348583 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-18 07:02:24.348594 | orchestrator | Wednesday 18 February 2026 07:02:15 +0000 (0:00:01.164) 1:11:04.667 ****
2026-02-18 07:02:24.348605 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.348616 | orchestrator |
2026-02-18 07:02:24.348627 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-18 07:02:24.348637 | orchestrator | Wednesday 18 February 2026 07:02:16 +0000 (0:00:01.143) 1:11:05.811 ****
2026-02-18 07:02:24.348648 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.348659 | orchestrator |
2026-02-18 07:02:24.348670 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-18 07:02:24.348682 | orchestrator | Wednesday 18 February 2026 07:02:18 +0000 (0:00:01.189) 1:11:07.000 ****
2026-02-18 07:02:24.348693 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.348704 | orchestrator |
2026-02-18 07:02:24.348715 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-18 07:02:24.348725 | orchestrator | Wednesday 18 February 2026 07:02:18 +0000 (0:00:00.813) 1:11:07.814 ****
2026-02-18 07:02:24.348736 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:02:24.348747 | orchestrator |
2026-02-18 07:02:24.348758 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-18 07:02:24.348769 | orchestrator | Wednesday 18 February 2026 07:02:21 +0000 (0:00:02.234) 1:11:10.048 ****
2026-02-18 07:02:24.348780 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:02:24.348791 | orchestrator |
2026-02-18 07:02:24.348853 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-18 07:02:24.348867 | orchestrator | Wednesday 18 February 2026 07:02:21 +0000 (0:00:00.802) 1:11:10.851 ****
2026-02-18 07:02:24.348879 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-18 07:02:24.348890 | orchestrator |
2026-02-18 07:02:24.348901 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-18 07:02:24.348912 | orchestrator | Wednesday 18 February 2026 07:02:23 +0000 (0:00:01.170) 1:11:12.021 ****
2026-02-18 07:02:24.348923 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:02:24.348934 | orchestrator |
2026-02-18 07:02:24.348945 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-18 07:02:24.348965 | orchestrator | Wednesday 18 February 2026 07:02:24 +0000 (0:00:01.188) 1:11:13.209 ****
2026-02-18 07:03:05.844896 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.844979 | orchestrator |
2026-02-18 07:03:05.844987 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-18 07:03:05.845008 | orchestrator | Wednesday 18 February 2026 07:02:25 +0000 (0:00:01.144) 1:11:14.354 ****
2026-02-18 07:03:05.845013 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845017 | orchestrator |
2026-02-18 07:03:05.845021 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-18 07:03:05.845025 | orchestrator | Wednesday 18 February 2026 07:02:26 +0000 (0:00:01.169) 1:11:15.523 ****
2026-02-18 07:03:05.845029 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845033 | orchestrator |
2026-02-18 07:03:05.845037 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-18 07:03:05.845041 | orchestrator | Wednesday 18 February 2026 07:02:27 +0000 (0:00:01.161) 1:11:16.684 ****
2026-02-18 07:03:05.845044 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845048 | orchestrator |
2026-02-18 07:03:05.845052 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-18 07:03:05.845056 | orchestrator | Wednesday 18 February 2026 07:02:28 +0000 (0:00:01.163) 1:11:17.849 ****
2026-02-18 07:03:05.845060 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845064 | orchestrator |
2026-02-18 07:03:05.845067 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-18 07:03:05.845071 | orchestrator | Wednesday 18 February 2026 07:02:30 +0000 (0:00:01.161) 1:11:19.010 ****
2026-02-18 07:03:05.845075 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845079 | orchestrator |
2026-02-18 07:03:05.845083 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-18 07:03:05.845087 | orchestrator | Wednesday 18 February 2026 07:02:31 +0000 (0:00:01.137) 1:11:20.148 ****
2026-02-18 07:03:05.845090 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845095 | orchestrator |
2026-02-18 07:03:05.845099 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-18 07:03:05.845102 | orchestrator | Wednesday 18 February 2026 07:02:32 +0000 (0:00:01.146) 1:11:21.294 ****
2026-02-18 07:03:05.845106 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:03:05.845111 | orchestrator |
2026-02-18 07:03:05.845115 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-18 07:03:05.845119 | orchestrator | Wednesday 18 February 2026 07:02:33 +0000 (0:00:00.955) 1:11:22.250 ****
2026-02-18 07:03:05.845123 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-18 07:03:05.845128 | orchestrator |
2026-02-18 07:03:05.845132 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-18 07:03:05.845135 | orchestrator | Wednesday 18 February 2026 07:02:34 +0000 (0:00:01.176) 1:11:23.427 ****
2026-02-18 07:03:05.845140 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-18 07:03:05.845144 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-18 07:03:05.845148 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-18 07:03:05.845151 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-18 07:03:05.845155 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-18 07:03:05.845159 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-18 07:03:05.845163 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-18 07:03:05.845166 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-18 07:03:05.845170 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-18 07:03:05.845183 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-18 07:03:05.845187 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-18 07:03:05.845191 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-18 07:03:05.845195 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-18 07:03:05.845199 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-18 07:03:05.845202 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-18 07:03:05.845210 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-18 07:03:05.845214 | orchestrator |
2026-02-18 07:03:05.845218 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-18 07:03:05.845221 | orchestrator | Wednesday 18 February 2026 07:02:40 +0000 (0:00:06.204) 1:11:29.631 ****
2026-02-18 07:03:05.845225 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-18 07:03:05.845229 | orchestrator |
2026-02-18 07:03:05.845233 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-18 07:03:05.845237 | orchestrator | Wednesday 18 February 2026 07:02:41 +0000 (0:00:01.126) 1:11:30.758 ****
2026-02-18 07:03:05.845241 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-18 07:03:05.845245 | orchestrator |
2026-02-18 07:03:05.845249 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-18 07:03:05.845253 | orchestrator | Wednesday 18 February 2026 07:02:43 +0000 (0:00:01.483) 1:11:32.242 ****
2026-02-18 07:03:05.845257 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-18 07:03:05.845261 | orchestrator |
2026-02-18 07:03:05.845265 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-18 07:03:05.845268 | orchestrator | Wednesday 18 February 2026 07:02:45 +0000 (0:00:01.640) 1:11:33.883 ****
2026-02-18 07:03:05.845272 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845276 | orchestrator |
2026-02-18 07:03:05.845280 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-18 07:03:05.845293 | orchestrator | Wednesday 18 February 2026 07:02:45 +0000 (0:00:00.812) 1:11:34.695 ****
2026-02-18 07:03:05.845297 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845301 | orchestrator |
2026-02-18 07:03:05.845305 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-18 07:03:05.845309 | orchestrator | Wednesday 18 February 2026 07:02:46 +0000 (0:00:00.800) 1:11:35.496 ****
2026-02-18 07:03:05.845312 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845316 | orchestrator |
2026-02-18 07:03:05.845320 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-18 07:03:05.845324 | orchestrator | Wednesday 18 February 2026 07:02:47 +0000 (0:00:00.839) 1:11:36.336 ****
2026-02-18 07:03:05.845328 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845332 | orchestrator |
2026-02-18 07:03:05.845336 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-18 07:03:05.845339 | orchestrator | Wednesday 18 February 2026 07:02:48 +0000 (0:00:00.797) 1:11:37.134 ****
2026-02-18 07:03:05.845343 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845347 | orchestrator |
2026-02-18 07:03:05.845351 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-18 07:03:05.845355 | orchestrator | Wednesday 18 February 2026 07:02:49 +0000 (0:00:00.885) 1:11:38.019 ****
2026-02-18 07:03:05.845359 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845362 | orchestrator |
2026-02-18 07:03:05.845366 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-18 07:03:05.845370 | orchestrator | Wednesday 18 February 2026 07:02:50 +0000 (0:00:00.870) 1:11:38.890 ****
2026-02-18 07:03:05.845374 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845378 | orchestrator |
2026-02-18 07:03:05.845381 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-18 07:03:05.845385 | orchestrator | Wednesday 18 February 2026 07:02:50 +0000 (0:00:00.819) 1:11:39.709 ****
2026-02-18 07:03:05.845389 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845393 | orchestrator |
2026-02-18 07:03:05.845396 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-18 07:03:05.845404 | orchestrator | Wednesday 18 February 2026 07:02:51 +0000 (0:00:00.798) 1:11:40.508 ****
2026-02-18 07:03:05.845407 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845411 | orchestrator |
2026-02-18 07:03:05.845415 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-18 07:03:05.845419 | orchestrator | Wednesday 18 February 2026 07:02:52 +0000 (0:00:00.793) 1:11:41.302 ****
2026-02-18 07:03:05.845423 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845427 | orchestrator |
2026-02-18 07:03:05.845430 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-18 07:03:05.845434 | orchestrator | Wednesday 18 February 2026 07:02:53 +0000 (0:00:00.779) 1:11:42.081 ****
2026-02-18 07:03:05.845438 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845442 | orchestrator |
2026-02-18 07:03:05.845446 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-18 07:03:05.845450 | orchestrator | Wednesday 18 February 2026 07:02:54 +0000 (0:00:00.802) 1:11:42.884 ****
2026-02-18 07:03:05.845453 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-18 07:03:05.845457 | orchestrator |
2026-02-18 07:03:05.845461 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-18 07:03:05.845465 | orchestrator | Wednesday 18 February 2026 07:02:58 +0000 (0:00:04.053) 1:11:46.937 ****
2026-02-18 07:03:05.845471 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-18 07:03:05.845475 | orchestrator |
2026-02-18 07:03:05.845479 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-18 07:03:05.845483 | orchestrator | Wednesday 18 February 2026 07:02:59 +0000 (0:00:00.947) 1:11:47.885 ****
2026-02-18 07:03:05.845488 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-18 07:03:05.845495 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-18 07:03:05.845499 | orchestrator |
2026-02-18 07:03:05.845503 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-18 07:03:05.845507 | orchestrator | Wednesday 18 February 2026 07:03:03 +0000 (0:00:04.417) 1:11:52.302 ****
2026-02-18 07:03:05.845511 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845515 | orchestrator |
2026-02-18 07:03:05.845518 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-18 07:03:05.845522 | orchestrator | Wednesday 18 February 2026 07:03:04 +0000 (0:00:00.800) 1:11:53.103 ****
2026-02-18 07:03:05.845526 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845530 | orchestrator |
2026-02-18 07:03:05.845533 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-18 07:03:05.845537 | orchestrator | Wednesday 18 February 2026 07:03:05 +0000 (0:00:00.790) 1:11:53.893 ****
2026-02-18 07:03:05.845541 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:03:05.845545 | orchestrator |
2026-02-18 07:03:05.845549 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-18 07:03:05.845556 | orchestrator | Wednesday 18 February 2026 07:03:05 +0000 (0:00:00.813) 1:11:54.707 ****
2026-02-18 07:04:10.900954 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:04:10.901048 | orchestrator |
2026-02-18 07:04:10.901059 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-18 07:04:10.901068 | orchestrator | Wednesday 18 February 2026 07:03:06 +0000 (0:00:00.815) 1:11:55.522 ****
2026-02-18 07:04:10.901094 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:04:10.901101 | orchestrator |
2026-02-18 07:04:10.901107 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-18 07:04:10.901114 | orchestrator | Wednesday 18 February 2026 07:03:07 +0000 (0:00:00.799) 1:11:56.322 ****
2026-02-18 07:04:10.901120 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:04:10.901127 | orchestrator |
2026-02-18 07:04:10.901134 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-18 07:04:10.901140 | orchestrator | Wednesday 18 February 2026 07:03:08 +0000 (0:00:00.997) 1:11:57.319 ****
2026-02-18 07:04:10.901146 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-18 07:04:10.901153 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-18 07:04:10.901159 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-18 07:04:10.901165 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:04:10.901171 | orchestrator |
2026-02-18 07:04:10.901177 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-18 07:04:10.901184 | orchestrator | Wednesday 18 February 2026 07:03:09 +0000 (0:00:01.086) 1:11:58.406 ****
2026-02-18 07:04:10.901190 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-18 07:04:10.901196 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-18 07:04:10.901202 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-18 07:04:10.901208 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:04:10.901214 | orchestrator |
2026-02-18 07:04:10.901220 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-18 07:04:10.901227 | orchestrator | Wednesday 18 February 2026 07:03:10 +0000 (0:00:01.185) 1:11:59.591 ****
2026-02-18 07:04:10.901233 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-18 07:04:10.901239 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-18 07:04:10.901245 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-18 07:04:10.901251 | orchestrator | skipping: [testbed-node-5]
2026-02-18 07:04:10.901258 | orchestrator |
2026-02-18 07:04:10.901264 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-18 07:04:10.901270 | orchestrator | Wednesday 18 February 2026 07:03:11 +0000 (0:00:01.078) 1:12:00.670 ****
2026-02-18 07:04:10.901276 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:04:10.901282 | orchestrator |
2026-02-18 07:04:10.901288 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-18 07:04:10.901295 | orchestrator | Wednesday 18 February 2026 07:03:12 +0000 (0:00:00.803) 1:12:01.473 ****
2026-02-18 07:04:10.901301 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-18 07:04:10.901307 | orchestrator |
2026-02-18 07:04:10.901313 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-18 07:04:10.901319 | orchestrator | Wednesday 18 February 2026 07:03:13 +0000 (0:00:00.982) 1:12:02.456 ****
2026-02-18 07:04:10.901326 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:04:10.901332 | orchestrator |
2026-02-18 07:04:10.901338 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-18 07:04:10.901344 | orchestrator | Wednesday 18 February 2026 07:03:14 +0000 (0:00:01.391) 1:12:03.848 ****
2026-02-18 07:04:10.901361 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5
2026-02-18 07:04:10.901368 | orchestrator |
2026-02-18 07:04:10.901374 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-18 07:04:10.901380 | orchestrator | Wednesday 18 February 2026 07:03:16 +0000 (0:00:01.136) 1:12:04.985 ****
2026-02-18 07:04:10.901387 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-18 07:04:10.901393 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-18 07:04:10.901399 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-18 07:04:10.901411 | orchestrator |
2026-02-18 07:04:10.901418 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-18 07:04:10.901424 | orchestrator | Wednesday 18 February 2026 07:03:19 +0000 (0:00:03.170) 1:12:08.156 ****
2026-02-18 07:04:10.901430 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-18 07:04:10.901436 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-18 07:04:10.901443 | orchestrator | ok: [testbed-node-5]
2026-02-18 07:04:10.901449 | orchestrator |
2026-02-18 07:04:10.901456 | orchestrator | TASK [ceph-rgw : Copy
SSL certificate & key data to certificate path] ********** 2026-02-18 07:04:10.901462 | orchestrator | Wednesday 18 February 2026 07:03:21 +0000 (0:00:02.062) 1:12:10.218 **** 2026-02-18 07:04:10.901468 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:04:10.901475 | orchestrator | 2026-02-18 07:04:10.901481 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-18 07:04:10.901487 | orchestrator | Wednesday 18 February 2026 07:03:22 +0000 (0:00:00.883) 1:12:11.101 **** 2026-02-18 07:04:10.901493 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-02-18 07:04:10.901500 | orchestrator | 2026-02-18 07:04:10.901507 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-18 07:04:10.901514 | orchestrator | Wednesday 18 February 2026 07:03:23 +0000 (0:00:01.145) 1:12:12.247 **** 2026-02-18 07:04:10.901523 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-18 07:04:10.901531 | orchestrator | 2026-02-18 07:04:10.901538 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-18 07:04:10.901545 | orchestrator | Wednesday 18 February 2026 07:03:24 +0000 (0:00:01.584) 1:12:13.832 **** 2026-02-18 07:04:10.901564 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 07:04:10.901573 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-18 07:04:10.901581 | orchestrator | 2026-02-18 07:04:10.901588 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-18 07:04:10.901596 | orchestrator | Wednesday 18 February 2026 07:03:29 +0000 (0:00:05.019) 1:12:18.851 **** 
2026-02-18 07:04:10.901603 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-18 07:04:10.901611 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-18 07:04:10.901618 | orchestrator | 2026-02-18 07:04:10.901626 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-18 07:04:10.901634 | orchestrator | Wednesday 18 February 2026 07:03:33 +0000 (0:00:03.041) 1:12:21.893 **** 2026-02-18 07:04:10.901641 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-18 07:04:10.901648 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:04:10.901656 | orchestrator | 2026-02-18 07:04:10.901664 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-18 07:04:10.901671 | orchestrator | Wednesday 18 February 2026 07:03:34 +0000 (0:00:01.642) 1:12:23.536 **** 2026-02-18 07:04:10.901679 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-02-18 07:04:10.901686 | orchestrator | 2026-02-18 07:04:10.901694 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-18 07:04:10.901701 | orchestrator | Wednesday 18 February 2026 07:03:35 +0000 (0:00:01.185) 1:12:24.722 **** 2026-02-18 07:04:10.901709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 07:04:10.901717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 07:04:10.901724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 07:04:10.901737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-18 07:04:10.901745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 07:04:10.901752 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:04:10.901760 | orchestrator | 2026-02-18 07:04:10.901767 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-18 07:04:10.901801 | orchestrator | Wednesday 18 February 2026 07:03:37 +0000 (0:00:02.012) 1:12:26.734 **** 2026-02-18 07:04:10.901808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 07:04:10.901814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 07:04:10.901824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 07:04:10.901830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 07:04:10.901836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-18 07:04:10.901843 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:04:10.901849 | orchestrator | 2026-02-18 07:04:10.901855 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-18 07:04:10.901861 | orchestrator | Wednesday 18 February 2026 07:03:39 +0000 (0:00:02.025) 1:12:28.760 **** 2026-02-18 07:04:10.901868 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 07:04:10.901874 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 07:04:10.901881 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 07:04:10.901887 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 07:04:10.901895 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-18 07:04:10.901901 | orchestrator | 2026-02-18 07:04:10.901907 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-18 07:04:10.901913 | orchestrator | Wednesday 18 February 2026 07:04:10 +0000 (0:00:30.168) 1:12:58.928 **** 2026-02-18 07:04:10.901919 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:04:10.901926 | orchestrator | 2026-02-18 07:04:10.901932 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-18 07:04:10.901942 | orchestrator | Wednesday 18 February 2026 07:04:10 +0000 (0:00:00.836) 1:12:59.765 **** 2026-02-18 07:05:04.167918 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:05:04.168004 | orchestrator | 2026-02-18 07:05:04.168013 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-18 07:05:04.168019 | orchestrator | Wednesday 18 February 2026 07:04:11 +0000 (0:00:00.804) 1:13:00.569 **** 2026-02-18 07:05:04.168025 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-02-18 07:05:04.168031 | orchestrator | 2026-02-18 07:05:04.168036 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-18 07:05:04.168040 | orchestrator | Wednesday 18 February 2026 07:04:12 +0000 (0:00:01.096) 1:13:01.665 **** 2026-02-18 07:05:04.168045 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-02-18 07:05:04.168065 | orchestrator | 2026-02-18 07:05:04.168070 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-18 07:05:04.168075 | orchestrator | Wednesday 18 February 2026 07:04:13 +0000 (0:00:01.132) 1:13:02.798 **** 2026-02-18 07:05:04.168080 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:05:04.168085 | orchestrator | 2026-02-18 07:05:04.168090 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-18 07:05:04.168095 | orchestrator | Wednesday 18 February 2026 07:04:16 +0000 (0:00:02.125) 1:13:04.923 **** 2026-02-18 07:05:04.168100 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:05:04.168104 | orchestrator | 2026-02-18 07:05:04.168109 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-18 07:05:04.168115 | orchestrator | Wednesday 18 February 2026 07:04:17 +0000 (0:00:01.921) 1:13:06.844 **** 2026-02-18 07:05:04.168119 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:05:04.168124 | orchestrator | 2026-02-18 07:05:04.168129 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-18 07:05:04.168134 | orchestrator | Wednesday 18 February 2026 07:04:20 +0000 (0:00:02.220) 1:13:09.065 **** 2026-02-18 07:05:04.168139 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-18 07:05:04.168145 | orchestrator | 2026-02-18 07:05:04.168150 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-02-18 07:05:04.168154 | 
orchestrator | skipping: no hosts matched 2026-02-18 07:05:04.168159 | orchestrator | 2026-02-18 07:05:04.168164 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-02-18 07:05:04.168168 | orchestrator | skipping: no hosts matched 2026-02-18 07:05:04.168173 | orchestrator | 2026-02-18 07:05:04.168178 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-02-18 07:05:04.168182 | orchestrator | skipping: no hosts matched 2026-02-18 07:05:04.168187 | orchestrator | 2026-02-18 07:05:04.168191 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-02-18 07:05:04.168196 | orchestrator | 2026-02-18 07:05:04.168201 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-02-18 07:05:04.168205 | orchestrator | Wednesday 18 February 2026 07:04:24 +0000 (0:00:04.149) 1:13:13.214 **** 2026-02-18 07:05:04.168210 | orchestrator | changed: [testbed-node-0] 2026-02-18 07:05:04.168215 | orchestrator | changed: [testbed-node-1] 2026-02-18 07:05:04.168220 | orchestrator | changed: [testbed-node-2] 2026-02-18 07:05:04.168224 | orchestrator | changed: [testbed-node-3] 2026-02-18 07:05:04.168229 | orchestrator | changed: [testbed-node-4] 2026-02-18 07:05:04.168233 | orchestrator | changed: [testbed-node-5] 2026-02-18 07:05:04.168238 | orchestrator | 2026-02-18 07:05:04.168242 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-02-18 07:05:04.168258 | orchestrator | Wednesday 18 February 2026 07:04:27 +0000 (0:00:02.810) 1:13:16.025 **** 2026-02-18 07:05:04.168262 | orchestrator | changed: [testbed-node-0] 2026-02-18 07:05:04.168267 | orchestrator | changed: [testbed-node-2] 2026-02-18 07:05:04.168272 | orchestrator | changed: [testbed-node-1] 2026-02-18 07:05:04.168276 | orchestrator | changed: [testbed-node-3] 2026-02-18 07:05:04.168281 | 
orchestrator | changed: [testbed-node-4] 2026-02-18 07:05:04.168285 | orchestrator | changed: [testbed-node-5] 2026-02-18 07:05:04.168290 | orchestrator | 2026-02-18 07:05:04.168295 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 07:05:04.168299 | orchestrator | Wednesday 18 February 2026 07:04:30 +0000 (0:00:03.359) 1:13:19.384 **** 2026-02-18 07:05:04.168304 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:05:04.168308 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:05:04.168313 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:05:04.168318 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:05:04.168322 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:05:04.168327 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:05:04.168331 | orchestrator | 2026-02-18 07:05:04.168340 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 07:05:04.168345 | orchestrator | Wednesday 18 February 2026 07:04:32 +0000 (0:00:02.074) 1:13:21.458 **** 2026-02-18 07:05:04.168350 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:05:04.168354 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:05:04.168359 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:05:04.168363 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:05:04.168368 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:05:04.168372 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:05:04.168377 | orchestrator | 2026-02-18 07:05:04.168382 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-18 07:05:04.168386 | orchestrator | Wednesday 18 February 2026 07:04:34 +0000 (0:00:02.221) 1:13:23.680 **** 2026-02-18 07:05:04.168392 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 07:05:04.168398 | 
orchestrator | 2026-02-18 07:05:04.168403 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-18 07:05:04.168407 | orchestrator | Wednesday 18 February 2026 07:04:36 +0000 (0:00:02.172) 1:13:25.853 **** 2026-02-18 07:05:04.168412 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 07:05:04.168417 | orchestrator | 2026-02-18 07:05:04.168431 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-18 07:05:04.168436 | orchestrator | Wednesday 18 February 2026 07:04:39 +0000 (0:00:02.283) 1:13:28.137 **** 2026-02-18 07:05:04.168441 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:05:04.168445 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:05:04.168450 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:05:04.168454 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:05:04.168459 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:05:04.168463 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:05:04.168468 | orchestrator | 2026-02-18 07:05:04.168473 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-18 07:05:04.168477 | orchestrator | Wednesday 18 February 2026 07:04:41 +0000 (0:00:02.132) 1:13:30.269 **** 2026-02-18 07:05:04.168482 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:05:04.168487 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:05:04.168492 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:05:04.168498 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:05:04.168504 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:05:04.168509 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:05:04.168514 | orchestrator | 2026-02-18 07:05:04.168520 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-02-18 07:05:04.168525 | orchestrator | Wednesday 18 February 2026 07:04:43 +0000 (0:00:02.525) 1:13:32.795 **** 2026-02-18 07:05:04.168531 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:05:04.168536 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:05:04.168542 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:05:04.168547 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:05:04.168552 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:05:04.168558 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:05:04.168563 | orchestrator | 2026-02-18 07:05:04.168569 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-18 07:05:04.168574 | orchestrator | Wednesday 18 February 2026 07:04:46 +0000 (0:00:02.389) 1:13:35.185 **** 2026-02-18 07:05:04.168579 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:05:04.168583 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:05:04.168588 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:05:04.168592 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:05:04.168597 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:05:04.168602 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:05:04.168606 | orchestrator | 2026-02-18 07:05:04.168611 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-18 07:05:04.168619 | orchestrator | Wednesday 18 February 2026 07:04:48 +0000 (0:00:02.530) 1:13:37.715 **** 2026-02-18 07:05:04.168624 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:05:04.168628 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:05:04.168633 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:05:04.168637 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:05:04.168642 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:05:04.168646 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:05:04.168651 | orchestrator | 
2026-02-18 07:05:04.168656 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-18 07:05:04.168660 | orchestrator | Wednesday 18 February 2026 07:04:51 +0000 (0:00:02.239) 1:13:39.955 **** 2026-02-18 07:05:04.168665 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:05:04.168669 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:05:04.168674 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:05:04.168679 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:05:04.168683 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:05:04.168688 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:05:04.168692 | orchestrator | 2026-02-18 07:05:04.168697 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-18 07:05:04.168701 | orchestrator | Wednesday 18 February 2026 07:04:52 +0000 (0:00:01.819) 1:13:41.774 **** 2026-02-18 07:05:04.168710 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:05:04.168715 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:05:04.168719 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:05:04.168724 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:05:04.168728 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:05:04.168733 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:05:04.168737 | orchestrator | 2026-02-18 07:05:04.168742 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-18 07:05:04.168747 | orchestrator | Wednesday 18 February 2026 07:04:55 +0000 (0:00:02.191) 1:13:43.966 **** 2026-02-18 07:05:04.168751 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:05:04.168756 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:05:04.168779 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:05:04.168784 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:05:04.168788 | orchestrator | ok: [testbed-node-4] 
2026-02-18 07:05:04.168793 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:05:04.168797 | orchestrator | 2026-02-18 07:05:04.168802 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-18 07:05:04.168807 | orchestrator | Wednesday 18 February 2026 07:04:57 +0000 (0:00:02.299) 1:13:46.265 **** 2026-02-18 07:05:04.168811 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:05:04.168816 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:05:04.168821 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:05:04.168825 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:05:04.168830 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:05:04.168834 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:05:04.168839 | orchestrator | 2026-02-18 07:05:04.168843 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-18 07:05:04.168848 | orchestrator | Wednesday 18 February 2026 07:04:59 +0000 (0:00:02.519) 1:13:48.785 **** 2026-02-18 07:05:04.168852 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:05:04.168857 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:05:04.168862 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:05:04.168866 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:05:04.168871 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:05:04.168875 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:05:04.168880 | orchestrator | 2026-02-18 07:05:04.168885 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-18 07:05:04.168889 | orchestrator | Wednesday 18 February 2026 07:05:01 +0000 (0:00:01.908) 1:13:50.693 **** 2026-02-18 07:05:04.168894 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:05:04.168898 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:05:04.168906 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:05:04.168911 | orchestrator | skipping: 
[testbed-node-3] 2026-02-18 07:05:04.168916 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:05:04.168920 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:05:04.168925 | orchestrator | 2026-02-18 07:05:04.168932 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-18 07:06:00.629148 | orchestrator | Wednesday 18 February 2026 07:05:04 +0000 (0:00:02.325) 1:13:53.019 **** 2026-02-18 07:06:00.629271 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:00.629288 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:00.629299 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:00.629311 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:06:00.629323 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:06:00.629335 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:06:00.629346 | orchestrator | 2026-02-18 07:06:00.629358 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-18 07:06:00.629370 | orchestrator | Wednesday 18 February 2026 07:05:06 +0000 (0:00:01.870) 1:13:54.889 **** 2026-02-18 07:06:00.629381 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:00.629392 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:00.629402 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:00.629413 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:06:00.629424 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:06:00.629435 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:06:00.629446 | orchestrator | 2026-02-18 07:06:00.629457 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-18 07:06:00.629468 | orchestrator | Wednesday 18 February 2026 07:05:08 +0000 (0:00:02.079) 1:13:56.969 **** 2026-02-18 07:06:00.629479 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:00.629490 | orchestrator | skipping: [testbed-node-1] 2026-02-18 
07:06:00.629501 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:00.629512 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:06:00.629522 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:06:00.629533 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:06:00.629544 | orchestrator | 2026-02-18 07:06:00.629555 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-18 07:06:00.629566 | orchestrator | Wednesday 18 February 2026 07:05:09 +0000 (0:00:01.841) 1:13:58.811 **** 2026-02-18 07:06:00.629577 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:00.629588 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:00.629598 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:00.629609 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:06:00.629620 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:06:00.629631 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:06:00.629642 | orchestrator | 2026-02-18 07:06:00.629653 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-18 07:06:00.629664 | orchestrator | Wednesday 18 February 2026 07:05:11 +0000 (0:00:02.031) 1:14:00.843 **** 2026-02-18 07:06:00.629675 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:00.629689 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:00.629701 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:00.629715 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:06:00.629745 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:06:00.629787 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:06:00.629801 | orchestrator | 2026-02-18 07:06:00.629814 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-18 07:06:00.629827 | orchestrator | Wednesday 18 February 2026 07:05:13 +0000 (0:00:01.725) 1:14:02.569 **** 2026-02-18 07:06:00.629839 | 
orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:00.629852 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:06:00.629865 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:06:00.629878 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:06:00.629891 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:06:00.629904 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:06:00.629939 | orchestrator | 2026-02-18 07:06:00.629952 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-18 07:06:00.629979 | orchestrator | Wednesday 18 February 2026 07:05:15 +0000 (0:00:01.971) 1:14:04.541 **** 2026-02-18 07:06:00.629993 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:00.630008 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:06:00.630111 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:06:00.630130 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:06:00.630189 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:06:00.630208 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:06:00.630225 | orchestrator | 2026-02-18 07:06:00.630242 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-18 07:06:00.630260 | orchestrator | Wednesday 18 February 2026 07:05:17 +0000 (0:00:02.132) 1:14:06.673 **** 2026-02-18 07:06:00.630279 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:00.630295 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:06:00.630311 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:06:00.630329 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:06:00.630347 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:06:00.630366 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:06:00.630383 | orchestrator | 2026-02-18 07:06:00.630402 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-18 07:06:00.630419 | orchestrator | Wednesday 18 February 2026 07:05:20 +0000 (0:00:02.295) 
1:14:08.968 **** 2026-02-18 07:06:00.630438 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:00.630456 | orchestrator | 2026-02-18 07:06:00.630473 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-18 07:06:00.630492 | orchestrator | Wednesday 18 February 2026 07:05:23 +0000 (0:00:03.105) 1:14:12.073 **** 2026-02-18 07:06:00.630511 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:00.630530 | orchestrator | 2026-02-18 07:06:00.630548 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-18 07:06:00.630566 | orchestrator | Wednesday 18 February 2026 07:05:26 +0000 (0:00:03.129) 1:14:15.203 **** 2026-02-18 07:06:00.630581 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:00.630592 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:06:00.630603 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:06:00.630614 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:06:00.630624 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:06:00.630635 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:06:00.630646 | orchestrator | 2026-02-18 07:06:00.630657 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-18 07:06:00.630667 | orchestrator | Wednesday 18 February 2026 07:05:29 +0000 (0:00:02.723) 1:14:17.927 **** 2026-02-18 07:06:00.630679 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:00.630690 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:06:00.630701 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:06:00.630711 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:06:00.630722 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:06:00.630733 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:06:00.630743 | orchestrator | 2026-02-18 07:06:00.630786 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-02-18 07:06:00.630822 | orchestrator | 
Wednesday 18 February 2026 07:05:31 +0000 (0:00:02.633) 1:14:20.560 **** 2026-02-18 07:06:00.630835 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-18 07:06:00.630847 | orchestrator | 2026-02-18 07:06:00.630859 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-18 07:06:00.630870 | orchestrator | Wednesday 18 February 2026 07:05:34 +0000 (0:00:02.602) 1:14:23.163 **** 2026-02-18 07:06:00.630881 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:00.630892 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:06:00.630902 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:06:00.630913 | orchestrator | ok: [testbed-node-3] 2026-02-18 07:06:00.630924 | orchestrator | ok: [testbed-node-5] 2026-02-18 07:06:00.630950 | orchestrator | ok: [testbed-node-4] 2026-02-18 07:06:00.630961 | orchestrator | 2026-02-18 07:06:00.630972 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-18 07:06:00.630982 | orchestrator | Wednesday 18 February 2026 07:05:36 +0000 (0:00:02.630) 1:14:25.794 **** 2026-02-18 07:06:00.630994 | orchestrator | changed: [testbed-node-3] 2026-02-18 07:06:00.631005 | orchestrator | changed: [testbed-node-0] 2026-02-18 07:06:00.631016 | orchestrator | changed: [testbed-node-2] 2026-02-18 07:06:00.631027 | orchestrator | changed: [testbed-node-1] 2026-02-18 07:06:00.631038 | orchestrator | changed: [testbed-node-4] 2026-02-18 07:06:00.631048 | orchestrator | changed: [testbed-node-5] 2026-02-18 07:06:00.631059 | orchestrator | 2026-02-18 07:06:00.631070 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-02-18 07:06:00.631081 | orchestrator | 2026-02-18 07:06:00.631092 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-02-18 07:06:00.631103 | orchestrator | Wednesday 18 February 2026 07:05:41 +0000 (0:00:04.466) 1:14:30.261 **** 2026-02-18 07:06:00.631114 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:00.631124 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:06:00.631135 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:06:00.631146 | orchestrator | 2026-02-18 07:06:00.631157 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 07:06:00.631168 | orchestrator | Wednesday 18 February 2026 07:05:43 +0000 (0:00:02.017) 1:14:32.279 **** 2026-02-18 07:06:00.631179 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:00.631190 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:06:00.631200 | orchestrator | ok: [testbed-node-2] 2026-02-18 07:06:00.631211 | orchestrator | 2026-02-18 07:06:00.631222 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-18 07:06:00.631234 | orchestrator | Wednesday 18 February 2026 07:05:44 +0000 (0:00:01.532) 1:14:33.812 **** 2026-02-18 07:06:00.631245 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:00.631256 | orchestrator | 2026-02-18 07:06:00.631267 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-18 07:06:00.631278 | orchestrator | Wednesday 18 February 2026 07:05:47 +0000 (0:00:02.253) 1:14:36.066 **** 2026-02-18 07:06:00.631288 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:00.631299 | orchestrator | 2026-02-18 07:06:00.631310 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-02-18 07:06:00.631321 | orchestrator | 2026-02-18 07:06:00.631332 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-02-18 07:06:00.631343 | orchestrator | Wednesday 18 February 2026 07:05:49 +0000 (0:00:02.520) 1:14:38.586 **** 2026-02-18 
07:06:00.631354 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:00.631365 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:00.631385 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:00.631397 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:06:00.631407 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:06:00.631418 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:06:00.631429 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:00.631440 | orchestrator | 2026-02-18 07:06:00.631451 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 07:06:00.631462 | orchestrator | Wednesday 18 February 2026 07:05:52 +0000 (0:00:02.319) 1:14:40.905 **** 2026-02-18 07:06:00.631473 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:00.631484 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:00.631495 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:00.631505 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:06:00.631516 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:06:00.631526 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:06:00.631537 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:00.631548 | orchestrator | 2026-02-18 07:06:00.631559 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-18 07:06:00.631576 | orchestrator | Wednesday 18 February 2026 07:05:54 +0000 (0:00:02.558) 1:14:43.464 **** 2026-02-18 07:06:00.631587 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:00.631598 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:00.631609 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:00.631620 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:06:00.631630 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:06:00.631641 | orchestrator | skipping: [testbed-node-5] 2026-02-18 
07:06:00.631652 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:00.631663 | orchestrator | 2026-02-18 07:06:00.631673 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-18 07:06:00.631685 | orchestrator | Wednesday 18 February 2026 07:05:57 +0000 (0:00:02.864) 1:14:46.328 **** 2026-02-18 07:06:00.631696 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:00.631706 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:00.631717 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:00.631728 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:06:00.631739 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:06:00.631798 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:06:00.631810 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:00.631821 | orchestrator | 2026-02-18 07:06:00.631832 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-02-18 07:06:00.631843 | orchestrator | Wednesday 18 February 2026 07:05:59 +0000 (0:00:02.539) 1:14:48.868 **** 2026-02-18 07:06:00.631854 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:00.631865 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:00.631876 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:00.631896 | orchestrator | skipping: [testbed-node-3] 2026-02-18 07:06:51.435468 | orchestrator | skipping: [testbed-node-4] 2026-02-18 07:06:51.435560 | orchestrator | skipping: [testbed-node-5] 2026-02-18 07:06:51.435570 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435577 | orchestrator | 2026-02-18 07:06:51.435585 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-02-18 07:06:51.435593 | orchestrator | 2026-02-18 07:06:51.435599 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-02-18 07:06:51.435607 | 
orchestrator | Wednesday 18 February 2026 07:06:03 +0000 (0:00:03.073) 1:14:51.942 **** 2026-02-18 07:06:51.435614 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-02-18 07:06:51.435622 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-02-18 07:06:51.435627 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-02-18 07:06:51.435631 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435635 | orchestrator | 2026-02-18 07:06:51.435639 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-18 07:06:51.435643 | orchestrator | Wednesday 18 February 2026 07:06:04 +0000 (0:00:01.233) 1:14:53.176 **** 2026-02-18 07:06:51.435647 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435651 | orchestrator | 2026-02-18 07:06:51.435655 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-18 07:06:51.435660 | orchestrator | Wednesday 18 February 2026 07:06:05 +0000 (0:00:01.107) 1:14:54.283 **** 2026-02-18 07:06:51.435664 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435667 | orchestrator | 2026-02-18 07:06:51.435671 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-18 07:06:51.435675 | orchestrator | Wednesday 18 February 2026 07:06:06 +0000 (0:00:01.105) 1:14:55.388 **** 2026-02-18 07:06:51.435679 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435683 | orchestrator | 2026-02-18 07:06:51.435686 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-18 07:06:51.435690 | orchestrator | Wednesday 18 February 2026 07:06:07 +0000 (0:00:01.124) 1:14:56.513 **** 2026-02-18 07:06:51.435694 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435698 | orchestrator | 2026-02-18 07:06:51.435702 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-02-18 07:06:51.435720 | orchestrator | Wednesday 18 February 2026 07:06:09 +0000 (0:00:01.372) 1:14:57.886 **** 2026-02-18 07:06:51.435724 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-02-18 07:06:51.435729 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-02-18 07:06:51.435732 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435736 | orchestrator | 2026-02-18 07:06:51.435740 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-02-18 07:06:51.435781 | orchestrator | Wednesday 18 February 2026 07:06:10 +0000 (0:00:01.121) 1:14:59.007 **** 2026-02-18 07:06:51.435785 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435789 | orchestrator | 2026-02-18 07:06:51.435793 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-02-18 07:06:51.435797 | orchestrator | Wednesday 18 February 2026 07:06:11 +0000 (0:00:01.116) 1:15:00.124 **** 2026-02-18 07:06:51.435800 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435804 | orchestrator | 2026-02-18 07:06:51.435808 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-02-18 07:06:51.435812 | orchestrator | Wednesday 18 February 2026 07:06:12 +0000 (0:00:01.135) 1:15:01.260 **** 2026-02-18 07:06:51.435825 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435829 | orchestrator | 2026-02-18 07:06:51.435833 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-02-18 07:06:51.435836 | orchestrator | Wednesday 18 February 2026 07:06:13 +0000 (0:00:01.116) 1:15:02.376 **** 2026-02-18 07:06:51.435840 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-02-18 07:06:51.435844 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-02-18 07:06:51.435848 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435852 | orchestrator | 2026-02-18 07:06:51.435855 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-02-18 07:06:51.435859 | orchestrator | Wednesday 18 February 2026 07:06:14 +0000 (0:00:01.139) 1:15:03.516 **** 2026-02-18 07:06:51.435863 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435866 | orchestrator | 2026-02-18 07:06:51.435870 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-02-18 07:06:51.435874 | orchestrator | Wednesday 18 February 2026 07:06:15 +0000 (0:00:01.138) 1:15:04.655 **** 2026-02-18 07:06:51.435878 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435882 | orchestrator | 2026-02-18 07:06:51.435885 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-02-18 07:06:51.435889 | orchestrator | Wednesday 18 February 2026 07:06:16 +0000 (0:00:01.146) 1:15:05.801 **** 2026-02-18 07:06:51.435893 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435897 | orchestrator | 2026-02-18 07:06:51.435900 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-02-18 07:06:51.435904 | orchestrator | Wednesday 18 February 2026 07:06:18 +0000 (0:00:01.326) 1:15:07.128 **** 2026-02-18 07:06:51.435908 | orchestrator | skipping: [testbed-manager] 2026-02-18 07:06:51.435912 | orchestrator | 2026-02-18 07:06:51.435916 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-02-18 07:06:51.435920 | orchestrator | 2026-02-18 07:06:51.435923 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-18 07:06:51.435927 | orchestrator | Wednesday 18 February 2026 07:06:19 +0000 (0:00:01.627) 1:15:08.756 **** 2026-02-18 
07:06:51.435931 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:51.435935 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:51.435938 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:51.435943 | orchestrator | 2026-02-18 07:06:51.435946 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-18 07:06:51.435950 | orchestrator | Wednesday 18 February 2026 07:06:21 +0000 (0:00:01.470) 1:15:10.227 **** 2026-02-18 07:06:51.435954 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:51.435961 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:51.435976 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:51.435980 | orchestrator | 2026-02-18 07:06:51.435984 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-18 07:06:51.435988 | orchestrator | Wednesday 18 February 2026 07:06:23 +0000 (0:00:01.758) 1:15:11.985 **** 2026-02-18 07:06:51.435992 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:51.435995 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:51.435999 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:51.436003 | orchestrator | 2026-02-18 07:06:51.436006 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-18 07:06:51.436010 | orchestrator | Wednesday 18 February 2026 07:06:24 +0000 (0:00:01.442) 1:15:13.428 **** 2026-02-18 07:06:51.436014 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:51.436018 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:51.436022 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:51.436025 | orchestrator | 2026-02-18 07:06:51.436029 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-18 07:06:51.436033 | orchestrator | Wednesday 18 February 2026 07:06:25 +0000 (0:00:01.431) 1:15:14.859 **** 2026-02-18 
07:06:51.436037 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:51.436040 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:51.436044 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:51.436048 | orchestrator | 2026-02-18 07:06:51.436052 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-02-18 07:06:51.436055 | orchestrator | Wednesday 18 February 2026 07:06:27 +0000 (0:00:01.405) 1:15:16.265 **** 2026-02-18 07:06:51.436059 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:51.436063 | orchestrator | skipping: [testbed-node-1] 2026-02-18 07:06:51.436067 | orchestrator | skipping: [testbed-node-2] 2026-02-18 07:06:51.436070 | orchestrator | 2026-02-18 07:06:51.436074 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-02-18 07:06:51.436078 | orchestrator | Wednesday 18 February 2026 07:06:29 +0000 (0:00:01.742) 1:15:18.007 **** 2026-02-18 07:06:51.436082 | orchestrator | skipping: [testbed-node-0] 2026-02-18 07:06:51.436086 | orchestrator | 2026-02-18 07:06:51.436089 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-02-18 07:06:51.436093 | orchestrator | 2026-02-18 07:06:51.436097 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-18 07:06:51.436101 | orchestrator | Wednesday 18 February 2026 07:06:30 +0000 (0:00:01.605) 1:15:19.613 **** 2026-02-18 07:06:51.436104 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:51.436108 | orchestrator | 2026-02-18 07:06:51.436112 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-18 07:06:51.436116 | orchestrator | Wednesday 18 February 2026 07:06:32 +0000 (0:00:01.463) 1:15:21.077 **** 2026-02-18 07:06:51.436119 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:51.436123 | orchestrator | 2026-02-18 07:06:51.436127 
| orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-02-18 07:06:51.436131 | orchestrator | Wednesday 18 February 2026 07:06:33 +0000 (0:00:01.166) 1:15:22.243 **** 2026-02-18 07:06:51.436135 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:51.436138 | orchestrator | 2026-02-18 07:06:51.436142 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-02-18 07:06:51.436146 | orchestrator | Wednesday 18 February 2026 07:06:34 +0000 (0:00:01.133) 1:15:23.377 **** 2026-02-18 07:06:51.436150 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:51.436153 | orchestrator | 2026-02-18 07:06:51.436157 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-02-18 07:06:51.436163 | orchestrator | Wednesday 18 February 2026 07:06:37 +0000 (0:00:02.985) 1:15:26.362 **** 2026-02-18 07:06:51.436167 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:51.436171 | orchestrator | 2026-02-18 07:06:51.436175 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-02-18 07:06:51.436183 | orchestrator | Wednesday 18 February 2026 07:06:40 +0000 (0:00:03.411) 1:15:29.773 **** 2026-02-18 07:06:51.436187 | orchestrator | changed: [testbed-node-0] 2026-02-18 07:06:51.436191 | orchestrator | 2026-02-18 07:06:51.436194 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-02-18 07:06:51.436198 | orchestrator | 2026-02-18 07:06:51.436202 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-02-18 07:06:51.436205 | orchestrator | Wednesday 18 February 2026 07:06:42 +0000 (0:00:01.948) 1:15:31.722 **** 2026-02-18 07:06:51.436209 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:51.436213 | orchestrator | ok: [testbed-node-1] 2026-02-18 07:06:51.436217 | orchestrator | ok: [testbed-node-2] 2026-02-18 
07:06:51.436221 | orchestrator | 2026-02-18 07:06:51.436224 | orchestrator | TASK [Show ceph status] ******************************************************** 2026-02-18 07:06:51.436228 | orchestrator | Wednesday 18 February 2026 07:06:44 +0000 (0:00:01.510) 1:15:33.233 **** 2026-02-18 07:06:51.436232 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:51.436236 | orchestrator | 2026-02-18 07:06:51.436239 | orchestrator | TASK [Show all daemons version] ************************************************ 2026-02-18 07:06:51.436243 | orchestrator | Wednesday 18 February 2026 07:06:46 +0000 (0:00:02.255) 1:15:35.489 **** 2026-02-18 07:06:51.436247 | orchestrator | ok: [testbed-node-0] 2026-02-18 07:06:51.436251 | orchestrator | 2026-02-18 07:06:51.436254 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-18 07:06:51.436259 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-18 07:06:51.436264 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0 2026-02-18 07:06:51.436269 | orchestrator | testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369  rescued=0 ignored=0 2026-02-18 07:06:51.436273 | orchestrator | testbed-node-1 : ok=191  changed=14  unreachable=0 failed=0 skipped=343  rescued=0 ignored=0 2026-02-18 07:06:51.436279 | orchestrator | testbed-node-2 : ok=196  changed=14  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0 2026-02-18 07:06:52.340640 | orchestrator | testbed-node-3 : ok=316  changed=21  unreachable=0 failed=0 skipped=355  rescued=0 ignored=0 2026-02-18 07:06:52.340793 | orchestrator | testbed-node-4 : ok=302  changed=17  unreachable=0 failed=0 skipped=338  rescued=0 ignored=0 2026-02-18 07:06:52.340813 | orchestrator | testbed-node-5 : ok=309  changed=16  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0 2026-02-18 07:06:52.340825 | orchestrator | 2026-02-18 
07:06:52.340837 | orchestrator | 2026-02-18 07:06:52.340848 | orchestrator | 2026-02-18 07:06:52.340859 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-18 07:06:52.340871 | orchestrator | Wednesday 18 February 2026 07:06:51 +0000 (0:00:04.793) 1:15:40.282 **** 2026-02-18 07:06:52.340882 | orchestrator | =============================================================================== 2026-02-18 07:06:52.340893 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 72.44s 2026-02-18 07:06:52.340905 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 69.51s 2026-02-18 07:06:52.340915 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.14s 2026-02-18 07:06:52.340926 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.79s 2026-02-18 07:06:52.340937 | orchestrator | Gather and delegate facts ---------------------------------------------- 30.20s 2026-02-18 07:06:52.340948 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.17s 2026-02-18 07:06:52.340989 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 29.04s 2026-02-18 07:06:52.341000 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 27.94s 2026-02-18 07:06:52.341011 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 24.85s 2026-02-18 07:06:52.341021 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.02s 2026-02-18 07:06:52.341032 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.93s 2026-02-18 07:06:52.341043 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 21.95s 2026-02-18 07:06:52.341053 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.19s 2026-02-18 07:06:52.341064 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.62s 2026-02-18 07:06:52.341075 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.04s 2026-02-18 07:06:52.341085 | orchestrator | Stop ceph osd ---------------------------------------------------------- 12.67s 2026-02-18 07:06:52.341096 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.48s 2026-02-18 07:06:52.341107 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.30s 2026-02-18 07:06:52.341132 | orchestrator | Stop standby ceph mds -------------------------------------------------- 11.83s 2026-02-18 07:06:52.341144 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.14s 2026-02-18 07:06:52.754008 | orchestrator | + osism apply cephclient 2026-02-18 07:06:54.924045 | orchestrator | 2026-02-18 07:06:54 | INFO  | Task c4cadcb4-39f5-46a1-bb87-134d9ed43fc8 (cephclient) was prepared for execution. 2026-02-18 07:06:54.924129 | orchestrator | 2026-02-18 07:06:54 | INFO  | It takes a moment until task c4cadcb4-39f5-46a1-bb87-134d9ed43fc8 (cephclient) has been started and output is visible here. 
2026-02-18 07:07:14.361428 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-18 07:07:14.361548 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-18 07:07:14.361577 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-18 07:07:14.361588 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-18 07:07:14.361611 | orchestrator | 2026-02-18 07:07:14.361623 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-18 07:07:14.361635 | orchestrator | 2026-02-18 07:07:14.361646 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-18 07:07:14.361657 | orchestrator | Wednesday 18 February 2026 07:07:01 +0000 (0:00:01.782) 0:00:01.782 **** 2026-02-18 07:07:14.361669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-18 07:07:14.361681 | orchestrator | 2026-02-18 07:07:14.361693 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-18 07:07:14.361704 | orchestrator | Wednesday 18 February 2026 07:07:02 +0000 (0:00:00.771) 0:00:02.554 **** 2026-02-18 07:07:14.361716 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-18 07:07:14.361727 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-18 07:07:14.361770 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-18 07:07:14.361792 | orchestrator | 2026-02-18 07:07:14.361813 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-18 07:07:14.361831 | orchestrator | Wednesday 18 February 2026 07:07:03 +0000 (0:00:01.693) 0:00:04.248 **** 2026-02-18 07:07:14.361850 | orchestrator | ok: [testbed-manager] => 
(item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-18 07:07:14.361898 | orchestrator | 2026-02-18 07:07:14.361921 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-02-18 07:07:14.361940 | orchestrator | Wednesday 18 February 2026 07:07:05 +0000 (0:00:01.072) 0:00:05.320 **** 2026-02-18 07:07:14.361955 | orchestrator | ok: [testbed-manager] 2026-02-18 07:07:14.361966 | orchestrator | 2026-02-18 07:07:14.361977 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-18 07:07:14.361988 | orchestrator | Wednesday 18 February 2026 07:07:06 +0000 (0:00:00.944) 0:00:06.265 **** 2026-02-18 07:07:14.361999 | orchestrator | ok: [testbed-manager] 2026-02-18 07:07:14.362009 | orchestrator | 2026-02-18 07:07:14.362082 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-18 07:07:14.362094 | orchestrator | Wednesday 18 February 2026 07:07:06 +0000 (0:00:00.904) 0:00:07.169 **** 2026-02-18 07:07:14.362105 | orchestrator | ok: [testbed-manager] 2026-02-18 07:07:14.362116 | orchestrator | 2026-02-18 07:07:14.362126 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-18 07:07:14.362137 | orchestrator | Wednesday 18 February 2026 07:07:08 +0000 (0:00:01.151) 0:00:08.320 **** 2026-02-18 07:07:14.362148 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-18 07:07:14.362159 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-02-18 07:07:14.362170 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-18 07:07:14.362181 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-18 07:07:14.362192 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-18 07:07:14.362202 | orchestrator | 2026-02-18 07:07:14.362213 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] 
******************
2026-02-18 07:07:14.362224 | orchestrator | Wednesday 18 February 2026 07:07:12 +0000 (0:00:04.235) 0:00:12.556 ****
2026-02-18 07:07:14.362235 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-18 07:07:14.362247 | orchestrator |
2026-02-18 07:07:14.362258 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-18 07:07:14.362268 | orchestrator | Wednesday 18 February 2026 07:07:12 +0000 (0:00:00.469) 0:00:13.025 ****
2026-02-18 07:07:14.362279 | orchestrator | skipping: [testbed-manager]
2026-02-18 07:07:14.362290 | orchestrator |
2026-02-18 07:07:14.362301 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-18 07:07:14.362312 | orchestrator | Wednesday 18 February 2026 07:07:12 +0000 (0:00:00.162) 0:00:13.188 ****
2026-02-18 07:07:14.362323 | orchestrator | skipping: [testbed-manager]
2026-02-18 07:07:14.362334 | orchestrator |
2026-02-18 07:07:14.362345 | orchestrator | PLAY RECAP *********************************************************************
2026-02-18 07:07:14.362355 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-18 07:07:14.362367 | orchestrator |
2026-02-18 07:07:14.362378 | orchestrator |
2026-02-18 07:07:14.362389 | orchestrator | TASKS RECAP ********************************************************************
2026-02-18 07:07:14.362399 | orchestrator | Wednesday 18 February 2026 07:07:14 +0000 (0:00:01.099) 0:00:14.287 ****
2026-02-18 07:07:14.362410 | orchestrator | ===============================================================================
2026-02-18 07:07:14.362436 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.24s
2026-02-18 07:07:14.362447 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.69s
2026-02-18 07:07:14.362458 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 1.15s
2026-02-18 07:07:14.362468 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.10s
2026-02-18 07:07:14.362479 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.07s
2026-02-18 07:07:14.362490 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.94s
2026-02-18 07:07:14.362520 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.90s
2026-02-18 07:07:14.362531 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.77s
2026-02-18 07:07:14.362552 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s
2026-02-18 07:07:14.362563 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s
2026-02-18 07:07:14.721985 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-18 07:07:14.722137 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-02-18 07:07:14.733121 | orchestrator | + set -e
2026-02-18 07:07:14.733164 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-18 07:07:14.733177 | orchestrator | ++ export INTERACTIVE=false
2026-02-18 07:07:14.733190 | orchestrator | ++ INTERACTIVE=false
2026-02-18 07:07:14.733201 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-18 07:07:14.733212 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-18 07:07:14.733222 | orchestrator | + source /opt/manager-vars.sh
2026-02-18 07:07:14.733233 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-18 07:07:14.733244 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-18 07:07:14.733254 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-18 07:07:14.733265 | orchestrator | ++ CEPH_VERSION=reef
2026-02-18 07:07:14.733276 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-18 07:07:14.733287 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-18 07:07:14.733298 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-18 07:07:14.733309 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-18 07:07:14.733320 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-18 07:07:14.733331 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-18 07:07:14.733342 | orchestrator | ++ export ARA=false
2026-02-18 07:07:14.733353 | orchestrator | ++ ARA=false
2026-02-18 07:07:14.733364 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-18 07:07:14.733374 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-18 07:07:14.733385 | orchestrator | ++ export TEMPEST=false
2026-02-18 07:07:14.733396 | orchestrator | ++ TEMPEST=false
2026-02-18 07:07:14.733406 | orchestrator | ++ export IS_ZUUL=true
2026-02-18 07:07:14.733417 | orchestrator | ++ IS_ZUUL=true
2026-02-18 07:07:14.733428 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 07:07:14.733439 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2026-02-18 07:07:14.733450 | orchestrator | ++ export EXTERNAL_API=false
2026-02-18 07:07:14.733461 | orchestrator | ++ EXTERNAL_API=false
2026-02-18 07:07:14.733471 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-18 07:07:14.733482 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-18 07:07:14.733493 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-18 07:07:14.733503 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-18 07:07:14.733514 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-18 07:07:14.733525 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-18 07:07:14.733536 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-18 07:07:14.733546 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-18 07:07:14.733557 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-18 07:07:14.733682 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-18 07:07:14.739487 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-18 07:07:14.739566 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-18 07:07:14.739583 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-18 07:07:14.739598 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-02-18 07:07:36.896114 | orchestrator | 2026-02-18 07:07:36 | ERROR  | Unable to get ansible vault password
2026-02-18 07:07:36.896230 | orchestrator | 2026-02-18 07:07:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-18 07:07:36.896247 | orchestrator | 2026-02-18 07:07:36 | ERROR  | Dropping encrypted entries
2026-02-18 07:07:36.936290 | orchestrator | 2026-02-18 07:07:36 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-02-18 07:07:36.937077 | orchestrator | 2026-02-18 07:07:36 | INFO  | Kolla configuration check passed
2026-02-18 07:07:37.183218 | orchestrator | 2026-02-18 07:07:37 | INFO  | Created vhost 'openstack' with default_queue_type=quorum
2026-02-18 07:07:37.200729 | orchestrator | 2026-02-18 07:07:37 | INFO  | Set permissions for user 'openstack' on vhost 'openstack'
2026-02-18 07:07:37.531170 | orchestrator | + osism migrate rabbitmq3to4 list
2026-02-18 07:07:58.716287 | orchestrator | 2026-02-18 07:07:58 | ERROR  | Unable to get ansible vault password
2026-02-18 07:07:58.716433 | orchestrator | 2026-02-18 07:07:58 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-18 07:07:58.716452 | orchestrator | 2026-02-18 07:07:58 | ERROR  | Dropping encrypted entries
2026-02-18 07:07:58.749154 | orchestrator | 2026-02-18 07:07:58 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-02-18 07:07:58.899393 | orchestrator | 2026-02-18 07:07:58 | INFO  | Found 205 classic queue(s) in vhost '/': 2026-02-18 07:07:58.899599 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-02-18 07:07:58.899626 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-02-18 07:07:58.899643 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-02-18 07:07:58.899684 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-02-18 07:07:58.899703 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - barbican.workers_fanout_4e1601ddeaeb4b13bdc7c231a8a9806b (vhost: /, messages: 0) 2026-02-18 07:07:58.899719 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - barbican.workers_fanout_b0786a12244b457d96dcecf0aa5d58d0 (vhost: /, messages: 0) 2026-02-18 07:07:58.899730 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - barbican.workers_fanout_ec9e6b9d02d24a4bb53178a941f18497 (vhost: /, messages: 0) 2026-02-18 07:07:58.899793 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-02-18 07:07:58.899819 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - central (vhost: /, messages: 0) 2026-02-18 07:07:58.899829 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.899839 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.899849 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.900400 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - central_fanout_3ed676f6054f45c8b9ab708ab83b167e (vhost: /, messages: 0) 2026-02-18 07:07:58.900427 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - central_fanout_5c7240f506744b5391113b47cd78e7a0 (vhost: /, messages: 0) 2026-02-18 
07:07:58.900439 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - central_fanout_5d1a8a7f4f2f432e93732e7364bbb59f (vhost: /, messages: 0) 2026-02-18 07:07:58.900451 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - central_fanout_8cd93682f1854bc9aa4c27c4d36109eb (vhost: /, messages: 0) 2026-02-18 07:07:58.900462 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - central_fanout_af8230690c1047f99adff34948509e85 (vhost: /, messages: 0) 2026-02-18 07:07:58.900474 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - central_fanout_f9dfc608a2714d2ea0cc5378a3145565 (vhost: /, messages: 0) 2026-02-18 07:07:58.900487 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-02-18 07:07:58.900499 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.900511 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.900917 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.900936 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-backup_fanout_351bbaf3767140b4afc1dff989f73cd3 (vhost: /, messages: 0) 2026-02-18 07:07:58.900971 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-backup_fanout_51158e5991844a8aac80aadc845f0b27 (vhost: /, messages: 0) 2026-02-18 07:07:58.901119 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-backup_fanout_64ee0f30bd054cbab8f2fdae54ed1ad2 (vhost: /, messages: 0) 2026-02-18 07:07:58.901136 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-02-18 07:07:58.901146 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.901156 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.901425 | orchestrator | 2026-02-18 
07:07:58 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.901445 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-scheduler_fanout_93402ee1949e41a48955069e69ec8a2a (vhost: /, messages: 0) 2026-02-18 07:07:58.901455 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-scheduler_fanout_b56d5378c40849fd81b81c3939b64da8 (vhost: /, messages: 0) 2026-02-18 07:07:58.901466 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-scheduler_fanout_f566d43096064221a3372d0a78308b5f (vhost: /, messages: 0) 2026-02-18 07:07:58.901476 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-02-18 07:07:58.901487 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-02-18 07:07:58.901497 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.901508 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_431b47c82c094eadb702a6d18e4cd492 (vhost: /, messages: 0) 2026-02-18 07:07:58.901772 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-02-18 07:07:58.901856 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.901868 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_af9615f504f0495aa741515e3200bb3d (vhost: /, messages: 0) 2026-02-18 07:07:58.901878 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-02-18 07:07:58.901892 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.901902 | orchestrator | 2026-02-18 07:07:58 | INFO  
|  - cinder-volume_fanout_a5534517a56741aaac2e64c831eb3f53 (vhost: /, messages: 0) 2026-02-18 07:07:58.902262 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-volume_fanout_ad2e226df80040ff960f155cee217633 (vhost: /, messages: 0) 2026-02-18 07:07:58.902283 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - cinder-volume_fanout_b5a435bd85514d5cbd0a65133c937b68 (vhost: /, messages: 0) 2026-02-18 07:07:58.902294 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - compute (vhost: /, messages: 0) 2026-02-18 07:07:58.902305 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-02-18 07:07:58.902315 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-02-18 07:07:58.902586 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-02-18 07:07:58.902605 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - compute_fanout_6f443f5d68cf46bfbc8362196dd4bb00 (vhost: /, messages: 0) 2026-02-18 07:07:58.902643 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - compute_fanout_c6fa091fa9f14a089407c457666211aa (vhost: /, messages: 0) 2026-02-18 07:07:58.902655 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - conductor (vhost: /, messages: 0) 2026-02-18 07:07:58.902666 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.902677 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.902995 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.903012 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - conductor_fanout_406f4d68838743ec92a436558995b43f (vhost: /, messages: 0) 2026-02-18 07:07:58.903022 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - conductor_fanout_57b24d4abcc84dc79c90710b328516bf (vhost: /, messages: 0) 2026-02-18 07:07:58.903032 | 
orchestrator | 2026-02-18 07:07:58 | INFO  |  - conductor_fanout_7f6e0fe3acdc4218be900f40205b7c4a (vhost: /, messages: 0) 2026-02-18 07:07:58.903042 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - conductor_fanout_e4ddbff79d1d491da581e62cdf72d976 (vhost: /, messages: 0) 2026-02-18 07:07:58.903443 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - conductor_fanout_ed26b63fac60498994e6f1830164ee37 (vhost: /, messages: 0) 2026-02-18 07:07:58.903461 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - conductor_fanout_f8e8dd6424f44c5da6e74b06f20fcc86 (vhost: /, messages: 0) 2026-02-18 07:07:58.903471 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - event.sample (vhost: /, messages: 10) 2026-02-18 07:07:58.903480 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-02-18 07:07:58.903490 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor.53ekvbc46jpn (vhost: /, messages: 0) 2026-02-18 07:07:58.903500 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor.ffr5tdh6lpxb (vhost: /, messages: 0) 2026-02-18 07:07:58.903828 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor.yflbwhv4q52k (vhost: /, messages: 0) 2026-02-18 07:07:58.903845 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor_fanout_1d1ba967797e406a97ddda4f1b1c4b63 (vhost: /, messages: 0) 2026-02-18 07:07:58.903855 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor_fanout_31bc2ba6924a4f8d93ee5e6eed771af4 (vhost: /, messages: 0) 2026-02-18 07:07:58.903865 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor_fanout_3b17ad0b51f64c21a4dd4970c2cb4cc7 (vhost: /, messages: 0) 2026-02-18 07:07:58.903883 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor_fanout_6ef1da9ff7cd4615b28b9119f2d827e9 (vhost: /, messages: 0) 2026-02-18 07:07:58.904100 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor_fanout_6f29651a35ba41dd97596dac27c2b527 (vhost: /, messages: 0) 
2026-02-18 07:07:58.904119 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor_fanout_7de54101b48c44b887ddf5722c3efc4c (vhost: /, messages: 0) 2026-02-18 07:07:58.904129 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor_fanout_872b9622e25b45ecb4885971e570ad41 (vhost: /, messages: 0) 2026-02-18 07:07:58.904139 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor_fanout_9c97329a37a240518977006d03e1084c (vhost: /, messages: 0) 2026-02-18 07:07:58.904237 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - magnum-conductor_fanout_f546e7ac06bb4c77b399a3c4e671195d (vhost: /, messages: 0) 2026-02-18 07:07:58.904252 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-02-18 07:07:58.904273 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.904478 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.904495 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.904710 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-data_fanout_0609a6ddc0c34b59b6801683a46a7fa8 (vhost: /, messages: 0) 2026-02-18 07:07:58.904727 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-data_fanout_6042f47f57f8464392356c0112991998 (vhost: /, messages: 0) 2026-02-18 07:07:58.904762 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-data_fanout_d5735462828b4ccdab0418f3b7016d36 (vhost: /, messages: 0) 2026-02-18 07:07:58.904772 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-scheduler (vhost: /, messages: 0) 2026-02-18 07:07:58.905148 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.905166 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.905176 
| orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.905186 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-scheduler_fanout_5b979735e96d4707b4c6a02dce3bbbc7 (vhost: /, messages: 0) 2026-02-18 07:07:58.905254 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-scheduler_fanout_619bc3a6a53c4088a729f380dd274611 (vhost: /, messages: 0) 2026-02-18 07:07:58.905268 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-scheduler_fanout_89ef459aa93f47e08088b9868bc5ea41 (vhost: /, messages: 0) 2026-02-18 07:07:58.905278 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-02-18 07:07:58.905288 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-02-18 07:07:58.905297 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-02-18 07:07:58.905511 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-02-18 07:07:58.905528 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-share_fanout_9d83566d1d6047aea6729095e7b58dbf (vhost: /, messages: 0) 2026-02-18 07:07:58.905538 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-share_fanout_d459cadd7624419b8015384ab4eb81a3 (vhost: /, messages: 0) 2026-02-18 07:07:58.905707 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - manila-share_fanout_f0c8e7ebaf224c2cbca91ccc6f553c0b (vhost: /, messages: 0) 2026-02-18 07:07:58.905724 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - notifications.audit (vhost: /, messages: 0) 2026-02-18 07:07:58.905734 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - notifications.critical (vhost: /, messages: 0) 2026-02-18 07:07:58.905764 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-02-18 07:07:58.906097 | orchestrator | 
2026-02-18 07:07:58 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-02-18 07:07:58.906120 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-02-18 07:07:58.906310 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-02-18 07:07:58.906336 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-02-18 07:07:58.906357 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-02-18 07:07:58.906637 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.906655 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.906665 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.907160 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - octavia_provisioning_v2_fanout_19dfa4a0c3244c9fb88f35f882d2de26 (vhost: /, messages: 0) 2026-02-18 07:07:58.907180 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - octavia_provisioning_v2_fanout_50c9c67311294e8e8899b4fa2031a9b4 (vhost: /, messages: 0) 2026-02-18 07:07:58.907190 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - octavia_provisioning_v2_fanout_69728d196dd14c2f9148cb24ef3fbe47 (vhost: /, messages: 0) 2026-02-18 07:07:58.907200 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - producer (vhost: /, messages: 0) 2026-02-18 07:07:58.907209 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.907219 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.907228 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.907238 | 
orchestrator | 2026-02-18 07:07:58 | INFO  |  - producer_fanout_0f3461b2b5804ef2a461a6465dd49358 (vhost: /, messages: 0) 2026-02-18 07:07:58.907248 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - producer_fanout_1076120bcd7e4aa4a843dfc38ef44d23 (vhost: /, messages: 0) 2026-02-18 07:07:58.907317 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - producer_fanout_36b006d5c4904970823104a26f9f970c (vhost: /, messages: 0) 2026-02-18 07:07:58.907330 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - producer_fanout_88dc94b2012d4c1e8fe26bedca59587d (vhost: /, messages: 0) 2026-02-18 07:07:58.907421 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - producer_fanout_9ed75e23bc0c4dad9cf077192dc82628 (vhost: /, messages: 0) 2026-02-18 07:07:58.907434 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - producer_fanout_c5ead2b706fd4c4ca98c390ff7b07583 (vhost: /, messages: 0) 2026-02-18 07:07:58.907443 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-02-18 07:07:58.907458 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.907618 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.907635 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.907645 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin_fanout_249c762b45bb4b36900f8c5c193c499a (vhost: /, messages: 0) 2026-02-18 07:07:58.907928 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin_fanout_2a28a384a630479ea7f2d33b73b5b2d3 (vhost: /, messages: 0) 2026-02-18 07:07:58.907946 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin_fanout_3c024f3573594bfab8359faaecda54f8 (vhost: /, messages: 0) 2026-02-18 07:07:58.907956 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin_fanout_490a393278c848b89ba65860ba135f90 (vhost: /, messages: 0) 2026-02-18 07:07:58.907966 | orchestrator | 
2026-02-18 07:07:58 | INFO  |  - q-plugin_fanout_5f1b287fc7c945739507934f6fd02e9e (vhost: /, messages: 0) 2026-02-18 07:07:58.908494 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin_fanout_6fd43647045b4061bbca74b4f6895c6e (vhost: /, messages: 0) 2026-02-18 07:07:58.908512 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin_fanout_9e667ab0baa8468b905b6255f61f267d (vhost: /, messages: 0) 2026-02-18 07:07:58.908522 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin_fanout_bc33ed1a9e574be3855117642487ac31 (vhost: /, messages: 0) 2026-02-18 07:07:58.908532 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-plugin_fanout_d3a0c06c6c474712b4f156e7b34b4a04 (vhost: /, messages: 0) 2026-02-18 07:07:58.908542 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-02-18 07:07:58.908552 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.908569 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.908579 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.908589 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_00421dbe32e5481fbebad0b5f8025a61 (vhost: /, messages: 0) 2026-02-18 07:07:58.908599 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_0d98265df4e34505991f07a9b0a3017f (vhost: /, messages: 0) 2026-02-18 07:07:58.908886 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_23ae07bba0af4be1813eaeb6e7b7157b (vhost: /, messages: 0) 2026-02-18 07:07:58.908905 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_2e136915b3be423991c866024e8b3186 (vhost: /, messages: 0) 2026-02-18 07:07:58.908915 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_5ede1c94a855415ebb7f61971a2ee8ef (vhost: /, 
messages: 0) 2026-02-18 07:07:58.908924 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_8579946706ce4dbfa7678b706824ffbe (vhost: /, messages: 0) 2026-02-18 07:07:58.909278 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_87551d14a1c7494e92b3bc45ad41652b (vhost: /, messages: 0) 2026-02-18 07:07:58.909295 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_8ee49bec71ec437cb3391e7e8aef2ed5 (vhost: /, messages: 0) 2026-02-18 07:07:58.909560 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_96b9e72cf4b344a487632c4ee8a1e022 (vhost: /, messages: 0) 2026-02-18 07:07:58.909578 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_9e59b44326fd4fc6b12b1baeca87463b (vhost: /, messages: 0) 2026-02-18 07:07:58.909589 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_b12fbf47f3794f38a1ad9f325cc58591 (vhost: /, messages: 0) 2026-02-18 07:07:58.909598 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_c52ab0c7276847dd9cd99b4e34b82f16 (vhost: /, messages: 0) 2026-02-18 07:07:58.909608 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_cd4407e28aaf49ecbd8f9044afa8e0c4 (vhost: /, messages: 0) 2026-02-18 07:07:58.909618 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_d844055822dd453887bebf633bc6dbb5 (vhost: /, messages: 0) 2026-02-18 07:07:58.909628 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_f22fadb8690c4d14bf98db3fc568fdcf (vhost: /, messages: 0) 2026-02-18 07:07:58.909637 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_f8a2bd8a83b741a0b460da6858391ea0 (vhost: /, messages: 0) 2026-02-18 07:07:58.909647 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-reports-plugin_fanout_fb4aee75b5ec4c2eabe12da7c6009e88 (vhost: /, messages: 0) 2026-02-18 07:07:58.909667 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - 
q-reports-plugin_fanout_fdfd066420074bd5ad343c40e5878ee9 (vhost: /, messages: 0) 2026-02-18 07:07:58.909770 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-02-18 07:07:58.909787 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.909797 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.910384 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.910403 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions_fanout_141567972e9d481ca78b199f81dbad40 (vhost: /, messages: 0) 2026-02-18 07:07:58.910414 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions_fanout_3e2177dcf6174ef8a6e755e4aa32f76d (vhost: /, messages: 0) 2026-02-18 07:07:58.910423 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions_fanout_3e43056b1f1b4f64be8d4f7642a5d2bc (vhost: /, messages: 0) 2026-02-18 07:07:58.910433 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions_fanout_3e530f5d05a640e6b41a668874e0e687 (vhost: /, messages: 0) 2026-02-18 07:07:58.910443 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions_fanout_5fe6b0cbcdc44e9d87568f00ce28e8a9 (vhost: /, messages: 0) 2026-02-18 07:07:58.910460 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions_fanout_74e4d1778d524e1dbb75454ca852698a (vhost: /, messages: 0) 2026-02-18 07:07:58.910621 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions_fanout_c32ccf2693684e04bc0cd7b33938d82c (vhost: /, messages: 0) 2026-02-18 07:07:58.910639 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions_fanout_c84e50ffff8c4f7884e57cc6c8eb6070 (vhost: /, messages: 0) 2026-02-18 
07:07:58.910649 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - q-server-resource-versions_fanout_df494338a2684515a9a20e10ef92e0fd (vhost: /, messages: 0) 2026-02-18 07:07:58.910658 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_1a51066af032488a915e82dd546b2930 (vhost: /, messages: 1) 2026-02-18 07:07:58.910668 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_212cd0c314cf4b08a8516ba230f4b080 (vhost: /, messages: 0) 2026-02-18 07:07:58.910678 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_3ad832e1aed24eec8b17ce27381a0600 (vhost: /, messages: 0) 2026-02-18 07:07:58.910992 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_48644c541c694637addffdddfa790dd2 (vhost: /, messages: 0) 2026-02-18 07:07:58.911010 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_4b12d3ff6d614566885eebd541a2a7b6 (vhost: /, messages: 0) 2026-02-18 07:07:58.911021 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_4efe2f86d9484841ae06150740974bd6 (vhost: /, messages: 0) 2026-02-18 07:07:58.911030 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_52b98223a7fe401b9b1f909a4e5b0677 (vhost: /, messages: 0) 2026-02-18 07:07:58.911040 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_5b07aa1ca93a4a30b8de734313babe6e (vhost: /, messages: 0) 2026-02-18 07:07:58.911049 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_62f9f2b26e514f798e3e5ebd6f58a795 (vhost: /, messages: 0) 2026-02-18 07:07:58.911059 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_70627cfd80834920acbc6b09ad50111c (vhost: /, messages: 0) 2026-02-18 07:07:58.912311 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_9719120530514b8ba2f1249e78ebdcff (vhost: /, messages: 0) 2026-02-18 07:07:58.915276 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_cec9aacde07f42bf864414c8fb4696ff (vhost: /, messages: 0) 2026-02-18 07:07:58.915298 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_e45ca6c1c2fc4b72a7cbe01838a774a8 (vhost: /, messages: 0) 2026-02-18 
07:07:58.915308 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_f2a5590b422c4a8c86692f798c6a873c (vhost: /, messages: 0) 2026-02-18 07:07:58.915318 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_f4f00e64229148d7955fce1f2a34611e (vhost: /, messages: 0) 2026-02-18 07:07:58.915327 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_f5221cd6e97e4d999c7e60bb399dc81a (vhost: /, messages: 0) 2026-02-18 07:07:58.915337 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_f966617d455540f8baf0c0683d31b7f2 (vhost: /, messages: 0) 2026-02-18 07:07:58.915346 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - reply_fd5ab57060af4427878fe444a2687955 (vhost: /, messages: 0) 2026-02-18 07:07:58.915356 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - scheduler (vhost: /, messages: 0) 2026-02-18 07:07:58.915366 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-18 07:07:58.915375 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-18 07:07:58.915385 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-18 07:07:58.915395 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - scheduler_fanout_42799eae735b478aa2b1f438ef6e004c (vhost: /, messages: 0) 2026-02-18 07:07:58.915405 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - scheduler_fanout_45d610d7e1484bdc80accfd0f534582f (vhost: /, messages: 0) 2026-02-18 07:07:58.915414 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - scheduler_fanout_634c0f45ed6749f5bace84d285cee427 (vhost: /, messages: 0) 2026-02-18 07:07:58.915424 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - scheduler_fanout_c05ec02136e1415887122721105ebc7d (vhost: /, messages: 0) 2026-02-18 07:07:58.915434 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - scheduler_fanout_ce2341ac3f294c72a0d38794b0709183 (vhost: /, messages: 0) 2026-02-18 07:07:58.915443 | orchestrator | 2026-02-18 07:07:58 | 
INFO  |  - scheduler_fanout_d0d06d480f544c01af3246377100b7d7 (vhost: /, messages: 0)
2026-02-18 07:07:58.915467 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - worker (vhost: /, messages: 0)
2026-02-18 07:07:58.915477 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-02-18 07:07:58.915487 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-02-18 07:07:58.915497 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-02-18 07:07:58.915506 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - worker_fanout_0a0619b1e8d8440b9169610ed8282c18 (vhost: /, messages: 0)
2026-02-18 07:07:58.915516 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - worker_fanout_2356e995a55746549e710ee9fc655eb9 (vhost: /, messages: 0)
2026-02-18 07:07:58.915526 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - worker_fanout_3d99964356964c409ad4c864f1d37c9a (vhost: /, messages: 0)
2026-02-18 07:07:58.915535 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - worker_fanout_4a35d96c14d34b3c90291d583cebc13e (vhost: /, messages: 0)
2026-02-18 07:07:58.915545 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - worker_fanout_52eaa4c794ab486e829c4d30161c724c (vhost: /, messages: 0)
2026-02-18 07:07:58.915554 | orchestrator | 2026-02-18 07:07:58 | INFO  |  - worker_fanout_e03e21c4c0884e3a9215b58618a722a9 (vhost: /, messages: 0)
2026-02-18 07:07:59.263465 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-02-18 07:08:01.327422 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-02-18 07:08:01.327518 | orchestrator |                                   [--no-close-connections] [--quorum]
2026-02-18 07:08:01.327536 | orchestrator |                                   [--vhost VHOST]
2026-02-18 07:08:01.327549 | orchestrator |                                   [{list,delete,prepare,check}]
2026-02-18 07:08:01.327562 | orchestrator |                                   [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-02-18 07:08:01.327575 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-02-18 07:08:02.102219 | orchestrator | ERROR
2026-02-18 07:08:02.102433 | orchestrator | {
2026-02-18 07:08:02.102471 | orchestrator |   "delta": "2:05:37.378674",
2026-02-18 07:08:02.102494 | orchestrator |   "end": "2026-02-18 07:08:01.640268",
2026-02-18 07:08:02.102514 | orchestrator |   "msg": "non-zero return code",
2026-02-18 07:08:02.102532 | orchestrator |   "rc": 2,
2026-02-18 07:08:02.102550 | orchestrator |   "start": "2026-02-18 05:02:24.261594"
2026-02-18 07:08:02.102596 | orchestrator | } failure
2026-02-18 07:08:02.350276 |
2026-02-18 07:08:02.350396 | PLAY RECAP
2026-02-18 07:08:02.350452 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-02-18 07:08:02.350476 |
2026-02-18 07:08:02.589367 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-02-18 07:08:02.590608 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-18 07:08:03.368790 |
2026-02-18 07:08:03.369024 | PLAY [Post output play]
2026-02-18 07:08:03.387937 |
2026-02-18 07:08:03.388105 | LOOP [stage-output : Register sources]
2026-02-18 07:08:03.458227 |
2026-02-18 07:08:03.458551 | TASK [stage-output : Check sudo]
2026-02-18 07:08:04.323120 | orchestrator | sudo: a password is required
2026-02-18 07:08:04.497060 | orchestrator | ok: Runtime: 0:00:00.015283
2026-02-18 07:08:04.511673 |
2026-02-18 07:08:04.511845 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-18 07:08:04.545859 |
2026-02-18 07:08:04.546098 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-18 07:08:04.608049 | orchestrator | ok
2026-02-18 07:08:04.614066 |
2026-02-18 07:08:04.614182 | LOOP [stage-output : Ensure target folders exist]
2026-02-18 07:08:05.054798 | orchestrator | ok: "docs"
2026-02-18 07:08:05.055126 |
2026-02-18 07:08:05.334015 | orchestrator | ok: "artifacts"
2026-02-18 07:08:05.575306 | orchestrator | ok: "logs"
2026-02-18 07:08:05.597980 |
2026-02-18 07:08:05.598162 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-18 07:08:05.637397 |
2026-02-18 07:08:05.637703 | TASK [stage-output : Make all log files readable]
2026-02-18 07:08:05.932106 | orchestrator | ok
2026-02-18 07:08:05.940805 |
2026-02-18 07:08:05.940944 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-18 07:08:05.975312 | orchestrator | skipping: Conditional result was False
2026-02-18 07:08:05.991819 |
2026-02-18 07:08:05.991982 | TASK [stage-output : Discover log files for compression]
2026-02-18 07:08:06.016645 | orchestrator | skipping: Conditional result was False
2026-02-18 07:08:06.030153 |
2026-02-18 07:08:06.030311 | LOOP [stage-output : Archive everything from logs]
2026-02-18 07:08:06.072445 |
2026-02-18 07:08:06.072665 | PLAY [Post cleanup play]
2026-02-18 07:08:06.081221 |
2026-02-18 07:08:06.081324 | TASK [Set cloud fact (Zuul deployment)]
2026-02-18 07:08:06.137231 | orchestrator | ok
2026-02-18 07:08:06.148112 |
2026-02-18 07:08:06.148236 | TASK [Set cloud fact (local deployment)]
2026-02-18 07:08:06.182128 | orchestrator | skipping: Conditional result was False
2026-02-18 07:08:06.195818 |
2026-02-18 07:08:06.195959 | TASK [Clean the cloud environment]
2026-02-18 07:08:06.894334 | orchestrator | 2026-02-18 07:08:06 - clean up servers
2026-02-18 07:08:08.104665 | orchestrator | 2026-02-18 07:08:08 - testbed-manager
2026-02-18 07:08:08.243365 | orchestrator | 2026-02-18 07:08:08 - testbed-node-2
2026-02-18 07:08:08.335237 | orchestrator | 2026-02-18 07:08:08 - testbed-node-1
2026-02-18 07:08:08.427222 | orchestrator | 2026-02-18 07:08:08 - testbed-node-0
2026-02-18 07:08:08.520684 |
orchestrator | 2026-02-18 07:08:08 - testbed-node-3 2026-02-18 07:08:08.614647 | orchestrator | 2026-02-18 07:08:08 - testbed-node-5 2026-02-18 07:08:08.714427 | orchestrator | 2026-02-18 07:08:08 - testbed-node-4 2026-02-18 07:08:08.809052 | orchestrator | 2026-02-18 07:08:08 - clean up keypairs 2026-02-18 07:08:08.831521 | orchestrator | 2026-02-18 07:08:08 - testbed 2026-02-18 07:08:08.856237 | orchestrator | 2026-02-18 07:08:08 - wait for servers to be gone 2026-02-18 07:08:19.831546 | orchestrator | 2026-02-18 07:08:19 - clean up ports 2026-02-18 07:08:20.028589 | orchestrator | 2026-02-18 07:08:20 - 2c9b5195-2c55-4a4c-98ee-7f7c48227a0a 2026-02-18 07:08:20.303898 | orchestrator | 2026-02-18 07:08:20 - 43789de1-c7eb-482e-b0c2-6a9a9ca8ef74 2026-02-18 07:08:20.616820 | orchestrator | 2026-02-18 07:08:20 - 487c7969-637c-403a-809d-5cddb6860d09 2026-02-18 07:08:21.060107 | orchestrator | 2026-02-18 07:08:21 - 5733f1ab-da4f-48fc-94fd-b28accc67720 2026-02-18 07:08:21.258530 | orchestrator | 2026-02-18 07:08:21 - 68defc63-2716-4bd4-9fc4-ed87599cf104 2026-02-18 07:08:21.486730 | orchestrator | 2026-02-18 07:08:21 - 9350da0b-78ba-4dc7-ac35-5d081185d7d2 2026-02-18 07:08:21.700621 | orchestrator | 2026-02-18 07:08:21 - db4addd0-08e7-460a-b76c-0bbf3ad9ecc7 2026-02-18 07:08:22.370106 | orchestrator | 2026-02-18 07:08:22 - clean up volumes 2026-02-18 07:08:22.485523 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-4-node-base 2026-02-18 07:08:22.524251 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-2-node-base 2026-02-18 07:08:22.565377 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-0-node-base 2026-02-18 07:08:22.605245 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-1-node-base 2026-02-18 07:08:22.644412 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-3-node-base 2026-02-18 07:08:22.690495 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-5-node-base 2026-02-18 07:08:22.737527 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-7-node-4 
2026-02-18 07:08:22.783391 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-2-node-5 2026-02-18 07:08:22.827875 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-manager-base 2026-02-18 07:08:22.873926 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-3-node-3 2026-02-18 07:08:22.920878 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-4-node-4 2026-02-18 07:08:22.964579 | orchestrator | 2026-02-18 07:08:22 - testbed-volume-1-node-4 2026-02-18 07:08:23.012104 | orchestrator | 2026-02-18 07:08:23 - testbed-volume-8-node-5 2026-02-18 07:08:23.058118 | orchestrator | 2026-02-18 07:08:23 - testbed-volume-0-node-3 2026-02-18 07:08:23.100031 | orchestrator | 2026-02-18 07:08:23 - testbed-volume-6-node-3 2026-02-18 07:08:23.145012 | orchestrator | 2026-02-18 07:08:23 - testbed-volume-5-node-5 2026-02-18 07:08:23.186799 | orchestrator | 2026-02-18 07:08:23 - disconnect routers 2026-02-18 07:08:23.300619 | orchestrator | 2026-02-18 07:08:23 - testbed 2026-02-18 07:08:24.463178 | orchestrator | 2026-02-18 07:08:24 - clean up subnets 2026-02-18 07:08:24.514121 | orchestrator | 2026-02-18 07:08:24 - subnet-testbed-management 2026-02-18 07:08:24.692094 | orchestrator | 2026-02-18 07:08:24 - clean up networks 2026-02-18 07:08:24.882263 | orchestrator | 2026-02-18 07:08:24 - net-testbed-management 2026-02-18 07:08:25.174367 | orchestrator | 2026-02-18 07:08:25 - clean up security groups 2026-02-18 07:08:25.221218 | orchestrator | 2026-02-18 07:08:25 - testbed-node 2026-02-18 07:08:25.338397 | orchestrator | 2026-02-18 07:08:25 - testbed-management 2026-02-18 07:08:25.447851 | orchestrator | 2026-02-18 07:08:25 - clean up floating ips 2026-02-18 07:08:25.479584 | orchestrator | 2026-02-18 07:08:25 - 81.163.193.189 2026-02-18 07:08:25.837959 | orchestrator | 2026-02-18 07:08:25 - clean up routers 2026-02-18 07:08:25.936051 | orchestrator | 2026-02-18 07:08:25 - testbed 2026-02-18 07:08:26.785642 | orchestrator | ok: Runtime: 0:00:20.196651 2026-02-18 
07:08:26.788309 | 2026-02-18 07:08:26.788413 | PLAY RECAP 2026-02-18 07:08:26.788491 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-02-18 07:08:26.788526 | 2026-02-18 07:08:26.918427 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-02-18 07:08:26.920903 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-18 07:08:27.662738 | 2026-02-18 07:08:27.662974 | PLAY [Cleanup play] 2026-02-18 07:08:27.679089 | 2026-02-18 07:08:27.679221 | TASK [Set cloud fact (Zuul deployment)] 2026-02-18 07:08:27.732333 | orchestrator | ok 2026-02-18 07:08:27.740421 | 2026-02-18 07:08:27.740546 | TASK [Set cloud fact (local deployment)] 2026-02-18 07:08:27.764381 | orchestrator | skipping: Conditional result was False 2026-02-18 07:08:27.778751 | 2026-02-18 07:08:27.778929 | TASK [Clean the cloud environment] 2026-02-18 07:08:28.938198 | orchestrator | 2026-02-18 07:08:28 - clean up servers 2026-02-18 07:08:29.406388 | orchestrator | 2026-02-18 07:08:29 - clean up keypairs 2026-02-18 07:08:29.426903 | orchestrator | 2026-02-18 07:08:29 - wait for servers to be gone 2026-02-18 07:08:29.475518 | orchestrator | 2026-02-18 07:08:29 - clean up ports 2026-02-18 07:08:29.556240 | orchestrator | 2026-02-18 07:08:29 - clean up volumes 2026-02-18 07:08:29.620938 | orchestrator | 2026-02-18 07:08:29 - disconnect routers 2026-02-18 07:08:29.652846 | orchestrator | 2026-02-18 07:08:29 - clean up subnets 2026-02-18 07:08:29.677907 | orchestrator | 2026-02-18 07:08:29 - clean up networks 2026-02-18 07:08:30.328264 | orchestrator | 2026-02-18 07:08:30 - clean up security groups 2026-02-18 07:08:30.366403 | orchestrator | 2026-02-18 07:08:30 - clean up floating ips 2026-02-18 07:08:30.393059 | orchestrator | 2026-02-18 07:08:30 - clean up routers 2026-02-18 07:08:30.816921 | orchestrator | ok: Runtime: 0:00:01.860948 2026-02-18 07:08:30.820866 | 2026-02-18 
07:08:30.821045 | PLAY RECAP 2026-02-18 07:08:30.821176 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-02-18 07:08:30.821240 | 2026-02-18 07:08:30.942195 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-18 07:08:30.944467 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-18 07:08:31.717548 | 2026-02-18 07:08:31.717739 | PLAY [Base post-fetch] 2026-02-18 07:08:31.733418 | 2026-02-18 07:08:31.733598 | TASK [fetch-output : Set log path for multiple nodes] 2026-02-18 07:08:31.809967 | orchestrator | skipping: Conditional result was False 2026-02-18 07:08:31.825185 | 2026-02-18 07:08:31.825428 | TASK [fetch-output : Set log path for single node] 2026-02-18 07:08:31.884022 | orchestrator | ok 2026-02-18 07:08:31.892730 | 2026-02-18 07:08:31.892874 | LOOP [fetch-output : Ensure local output dirs] 2026-02-18 07:08:32.368361 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/dcb102d1513646059c8b4086c535c802/work/logs" 2026-02-18 07:08:32.654137 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dcb102d1513646059c8b4086c535c802/work/artifacts" 2026-02-18 07:08:32.923514 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dcb102d1513646059c8b4086c535c802/work/docs" 2026-02-18 07:08:32.936472 | 2026-02-18 07:08:32.936611 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-02-18 07:08:33.862080 | orchestrator | changed: .d..t...... ./ 2026-02-18 07:08:33.862395 | orchestrator | changed: All items complete 2026-02-18 07:08:33.862448 | 2026-02-18 07:08:34.602889 | orchestrator | changed: .d..t...... ./ 2026-02-18 07:08:35.361848 | orchestrator | changed: .d..t...... 
./ 2026-02-18 07:08:35.395347 | 2026-02-18 07:08:35.395503 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-02-18 07:08:35.433107 | orchestrator | skipping: Conditional result was False 2026-02-18 07:08:35.435370 | orchestrator | skipping: Conditional result was False 2026-02-18 07:08:35.451897 | 2026-02-18 07:08:35.452025 | PLAY RECAP 2026-02-18 07:08:35.452106 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-02-18 07:08:35.452149 | 2026-02-18 07:08:35.581361 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-18 07:08:35.582673 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-18 07:08:36.320819 | 2026-02-18 07:08:36.321000 | PLAY [Base post] 2026-02-18 07:08:36.336059 | 2026-02-18 07:08:36.336218 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-02-18 07:08:37.337640 | orchestrator | changed 2026-02-18 07:08:37.346802 | 2026-02-18 07:08:37.346955 | PLAY RECAP 2026-02-18 07:08:37.347024 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-02-18 07:08:37.347093 | 2026-02-18 07:08:37.469968 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-18 07:08:37.472396 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-02-18 07:08:38.267649 | 2026-02-18 07:08:38.267823 | PLAY [Base post-logs] 2026-02-18 07:08:38.278412 | 2026-02-18 07:08:38.278590 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-02-18 07:08:38.720508 | localhost | changed 2026-02-18 07:08:38.730445 | 2026-02-18 07:08:38.730642 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-02-18 07:08:38.767448 | localhost | ok 2026-02-18 07:08:38.772144 | 2026-02-18 07:08:38.772267 | TASK [Set zuul-log-path fact] 2026-02-18 
07:08:38.788794 | localhost | ok 2026-02-18 07:08:38.800234 | 2026-02-18 07:08:38.800366 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-02-18 07:08:38.826309 | localhost | ok 2026-02-18 07:08:38.830650 | 2026-02-18 07:08:38.830791 | TASK [upload-logs : Create log directories] 2026-02-18 07:08:39.340834 | localhost | changed 2026-02-18 07:08:39.346343 | 2026-02-18 07:08:39.346505 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-02-18 07:08:39.848957 | localhost -> localhost | ok: Runtime: 0:00:00.007395 2026-02-18 07:08:39.852957 | 2026-02-18 07:08:39.853064 | TASK [upload-logs : Upload logs to log server] 2026-02-18 07:08:40.410444 | localhost | Output suppressed because no_log was given 2026-02-18 07:08:40.414696 | 2026-02-18 07:08:40.414919 | LOOP [upload-logs : Compress console log and json output] 2026-02-18 07:08:40.476675 | localhost | skipping: Conditional result was False 2026-02-18 07:08:40.482359 | localhost | skipping: Conditional result was False 2026-02-18 07:08:40.490134 | 2026-02-18 07:08:40.490390 | LOOP [upload-logs : Upload compressed console log and json output] 2026-02-18 07:08:40.559338 | localhost | skipping: Conditional result was False 2026-02-18 07:08:40.560204 | 2026-02-18 07:08:40.563046 | localhost | skipping: Conditional result was False 2026-02-18 07:08:40.576364 | 2026-02-18 07:08:40.576678 | LOOP [upload-logs : Upload console log and json output]